Scielo RSS <![CDATA[Computación y Sistemas]]> http://www.scielo.org.mx/rss.php?pid=1405-554620160001&lang=pt vol. 20 num. 1 lang. pt <![CDATA[SciELO Logo]]> http://www.scielo.org.mx/img/en/fbpelogp.gif http://www.scielo.org.mx <![CDATA[Editorial]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100003&lng=pt&nrm=iso&tlng=pt <![CDATA[Unsupervised Machine Learning Application to Perform a Systematic Review and Meta-Analysis in Medical Research]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100007&lng=pt&nrm=iso&tlng=pt Abstract. When trying to synthesize information from multiple sources and perform a statistical review to compare them, particularly in the medical research field, several statistical tools are available, the most common being the systematic review and the meta-analysis. These techniques allow the comparison of the effectiveness or success among a group of studies. However, a problem with these tools is that if the information to be compared is incomplete or mismatched between two or more studies, the comparison becomes an arduous task. In parallel, machine learning methodologies have proven to be a reliable resource: such software classifies several variables and learns from previous experience to improve the classification. In this paper, we use unsupervised machine learning methodologies to describe a simple yet effective algorithm that, given a dataset with missing data, completes such data, which leads to a more complete systematic review and meta-analysis capable of presenting a final effectiveness or success rating across studies. Our method is first validated in a movie ranking database scenario and then used in a real-life systematic review and meta-analysis of obesity prevention scientific papers, where 66.6% of the outcomes are missing. <![CDATA[Single-Camera Automatic Landmarking for People Recognition with an Ensemble of Regression Trees]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100019&lng=pt&nrm=iso&tlng=pt Abstract The Active Appearance Model (AAM) is a computer vision procedure for statistical matching of object shape and appearance between images. A main drawback of this technique comes from the construction of the shape mesh. Since landmarks must be manually placed when training shapes, AAM is a very time-consuming procedure and cannot be automatically applied to new objects observed in the images. An approach for automatic landmarking of body shapes on still images for AAM training is introduced in this paper. Several existing works apply automatic landmarking to faces or body joints. Here, we explore the possibility of extending one of these methods to full-body contours and demonstrate experimentally that it is a plausible approach in terms of accuracy and speed. Our proposal represents a new research line in human body pose tracking with a single-view camera. Hence, a real-time implementation would allow robots endowed with minimal vision resources, such as a webcam, to recognize people in human-robot interaction tasks.
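For readers unfamiliar with the ensemble-of-regression-trees landmarking that the previous abstract builds on, the sketch below runs dlib's shape predictor, a standard implementation of that family of methods, on a detected face region. The pretrained 68-point face model and the image path are placeholders, and the paper's extension from faces to full-body contours is not reproduced here; this is only a hedged illustration of the underlying technique.

```python
# Minimal sketch: landmark prediction with an ensemble of regression trees via
# dlib's shape_predictor. The model file and image path are placeholders.
import dlib

detector = dlib.get_frontal_face_detector()                  # region proposal
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("person.jpg")                    # placeholder image
for rect in detector(image, 1):                              # detected regions
    shape = predictor(image, rect)                           # cascade of regression trees
    landmarks = [(shape.part(i).x, shape.part(i).y)
                 for i in range(shape.num_parts)]
    print(f"{len(landmarks)} landmarks, first: {landmarks[0]}")
```

The same cascade-of-trees idea could, in principle, be retrained on body-contour annotations, which is the direction the abstract describes.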
<![CDATA[A Parallel Tool for Numerical Approximation of 3D Electromagnetic Surveys in Geophysics]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100029&lng=pt&nrm=iso&tlng=pt Abstract This paper presents an edge-based parallel code for the data computations that arise when applying one of the most popular electromagnetic methods in geophysics, namely, the controlled-source electromagnetic method (CSEM). The computational implementation is based on the linear Edge Finite Element Method in 3D isotropic domains because it has the ability to eliminate spurious solutions and is claimed to yield accurate results. The framework exploits the embarrassingly parallel tasks and the advantages of geometric flexibility, and it supports three different orientations of the dipole, or excitation source, on unstructured tetrahedral meshes in order to represent complex geological bodies through a local refinement technique. We demonstrate the performance and accuracy of our tool on the MareNostrum supercomputer (Barcelona Supercomputing Center) through scaling tests and canonical tests, respectively. <![CDATA[Design of a Speed Adaptive Controller for a PMSM using Artificial Intelligence]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100041&lng=pt&nrm=iso&tlng=pt Resumen. Los motores síncronos de imanes permanentes se han utilizado ampliamente en aplicaciones de velocidad variable de alta precisión; sin embargo, el esquema de control debe cumplir con altas exigencias de desempeño dinámico. En este trabajo se realiza un análisis comparativo de la respuesta de un motor síncrono con cuatro estrategias de control: convencional proporcional integral, modos deslizantes, lógica difusa y redes neuronales. Se describe el modelo del motor y el regulador de corriente, y se realiza el diseño del control. Además, se utiliza un observador no lineal para la estimación de la velocidad del rotor y el par de carga. El desempeño de cada controlador se analiza mediante simulaciones en el tiempo donde el motor se somete a diversas perturbaciones y cambios de referencia. La técnica de control propuesta mediante redes neuronales exhibe el mejor desempeño debido a que puede adaptarse a cada condición, demandando bajo costo computacional para una operación en línea y considerando las no linealidades del sistema.<hr/>Abstract. Permanent magnet synchronous motors have been widely used in variable speed drives; however, the control scheme must meet high requirements of dynamic performance. In this work, a comparative analysis of the response of a synchronous motor with four control strategies (conventional proportional integral, sliding mode, fuzzy logic, and neural networks) is presented. The motor model and the current controller are described; this allows the design of the control laws. In addition, a nonlinear observer for estimating the rotor speed and load torque is designed. The performance of each controller is analyzed using time simulations where the motor is subjected to disturbances and reference changes. The proposed control technique using neural networks exhibits the best performance because it can adapt to every condition, demanding low computational effort for online operation and considering the system nonlinearities. <![CDATA[A Comparative Analysis of Selection Schemes in the Artificial Bee Colony Algorithm]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100055&lng=pt&nrm=iso&tlng=pt Abstract.
The Artificial Bee Colony (ABC) algorithm is a popular swarm-based algorithm inspired by the intelligent foraging behavior of honey bees. In the past, many swarm-intelligence-based techniques have been introduced and have proved effective in solving various optimization problems. The exploitation of food sources is performed by onlooker bees in accordance with a proportional selection scheme that can be further modified to avoid shortcomings such as loss of population diversity and premature convergence. In this paper, different selection schemes, namely, tournament selection, truncation selection, disruptive selection, linear dynamic scaling, linear ranking, sigma truncation, and exponential ranking, are used to analyze the performance of the ABC algorithm by testing on standard benchmark functions. The simulation results show that the schemes other than the standard proportional selection deliver efficient performance. <![CDATA[(Hyper)sequent Calculi for the ALC(S4) Description Logics]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100067&lng=pt&nrm=iso&tlng=pt Abstract Description logics (DL) form a well-known family of knowledge representation languages. One of its main applications is on the Semantic Web as a reasoning framework in the form of the Web Ontology Language (OWL). In this paper, we propose a cut-free tree hypersequent calculus for terminological reasoning in the Description Logic ALC. We show that the calculus is sound and complete, and an implementation is provided together with a complexity analysis. In addition, we describe a cut-free sequent calculus for the description logic ALC with reflexive and transitive roles; soundness and completeness are proven, and a complexity analysis and an implementation are also provided. <![CDATA[Trajectory Graphs Appearing from the Skein Problems at the Hypercube]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100081&lng=pt&nrm=iso&tlng=pt Abstract We formally state Skein Problems in Hamiltonian graphs and prove that they can be reduced to the Independence Problem in Graph Theory. Skein problems can be widely used in cryptography, particularly in protocols for message authentication or entity identification. Let G be a Hamiltonian graph. Given a Hamiltonian cycle H, let Π be a set of pairwise disjoint sub-paths within H, $P_1 = [v_{11}, \dots, v_{m1}], \dots, P_k = [v_{1k}, \dots, v_{mk}]$, where m and k are two positive integers; then the pairs of extreme vertices $V = \{(v_{11}, v_{m1}), \dots, (v_{1k}, v_{mk})\}$ are connected by the paths in Π without any crossing. Conversely, assume that the following problem is posed: given a collection of pairs V, it is required to find a collection of pairwise disjoint paths, without any crossing, connecting each pair in V. We reduce this last problem to the Independence Problem in Graph Theory. In particular, for the case of the n-dimensional hypercube, we show that the resulting translated instances are not Berge graphs, and thus the most common polynomial-time algorithms for the translated problem do not apply. We have built a computing system to explicitly generate the graphs resulting from the reduction to the Independence Problem. Nevertheless, due to the doubly exponential growth of these graphs in terms of n, the physical computational resources are quickly exhausted.
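To make the reduction described in the last abstract concrete, the following sketch builds a toy instance on the 3-dimensional hypercube: candidate paths for each requested vertex pair become the vertices of a conflict graph, two candidates conflict when they intersect, and picking one non-conflicting candidate per pair is an independent-set question. The instance and the use of vertex-disjointness as the notion of crossing are simplifying assumptions for illustration, not the paper's exact construction.

```python
# Minimal sketch: reduce a tiny skein-style instance on the 3-cube to an
# independence question over a conflict graph of candidate paths.
from itertools import combinations

def hypercube_edges(n):
    """Edges of the n-dimensional hypercube on vertex labels 0..2^n-1."""
    return {(v, v ^ (1 << i)) for v in range(2 ** n) for i in range(n)}

def simple_paths(adj, start, goal, path=None):
    """Enumerate all simple paths from start to goal by depth-first search."""
    path = [start] if path is None else path
    if start == goal:
        yield tuple(path)
        return
    for nxt in adj[start]:
        if nxt not in path:
            yield from simple_paths(adj, nxt, goal, path + [nxt])

n = 3
adj = {v: [] for v in range(2 ** n)}
for a, b in hypercube_edges(n):
    adj[a].append(b)

pairs = [(0b000, 0b011), (0b100, 0b111)]           # the collection V of pairs
candidates = [(i, p) for i, (s, g) in enumerate(pairs)
              for p in simple_paths(adj, s, g)]     # vertices of the conflict graph

def conflict(p, q):
    return bool(set(p) & set(q))                    # shared vertex counts as a crossing here

# An independent set containing one candidate per pair solves the toy instance.
solution = next(
    ((pa, pb) for (ia, pa), (ib, pb) in combinations(candidates, 2)
     if ia != ib and not conflict(pa, pb)),
    None,
)
print("disjoint connecting paths:", solution)
```

Candidates for the same pair always share their endpoints, so they automatically conflict, which is why one candidate per pair falls out of an independent set of size k.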
<![CDATA[Social Network Analysis: a Practical Case Study]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100089&lng=pt&nrm=iso&tlng=pt Resumen Las redes sociales son consideradas como nuevos modos de socialización; a partir de ellas se puede tener una fuente de interacción entre las personas, posibilitando la contextualización de fenómenos sociales entre los individuos y las relaciones inherentes que han surgido. Las diferentes herramientas computacionales, junto con las métricas que brindan, sirven como base de conocimiento desde su aplicabilidad práctica sobre un tópico relevante como lo es el Análisis de Redes Sociales (ARS). Haciendo foco en Gephi como una herramienta de ARS, simple de usar y comprender en cuanto a métricas y visualizaciones, analizaremos un caso de estudio práctico en el ámbito educativo donde evaluaremos la dinámica de comunicación en un foro asíncrono dentro del contexto del ARS. Se evidenciará que las métricas del ARS y la visualización de la estructuración de los nodos y de las interacciones son una vía útil y potencialmente efectiva para analizar patrones de interacción en línea. Integrar esta aproximación del ARS al contexto educativo es un medio que le permite al profesor detectar y diagnosticar el clima social e intervenir de acuerdo a los resultados obtenidos. Finalmente, determinaremos conclusiones acerca de esta metodología y su utilidad en este dominio, como así también las líneas de trabajo futuro.<hr/>Abstract Social networks are considered a new way of socialization; they can function as a source of interaction among people, enabling the contextualization of social phenomena among individuals and the inherent relationships that have emerged. Various computational tools, along with the metrics they provide, serve as a knowledge base when applied practically to such a relevant topic as Social Network Analysis (SNA). Focusing on Gephi as an SNA tool, simple to use and understand in terms of metrics and visualizations, we discuss a practical case study in education where we evaluate the dynamics of asynchronous communication in a forum within the context of SNA. It becomes evident that the SNA metrics and the visualization of the node structure and interactions are a useful and potentially effective means of analyzing online interaction patterns. Integrating this SNA approach into the educational context allows the teacher to detect and diagnose the social climate and to intervene according to the results. Finally, we present conclusions concerning the methodology and its utility in this domain, as well as future lines of work. <![CDATA[Diagonal and Recursive Parameter Estimation for Black-box Systems with Bounded Inputs and Outputs]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100107&lng=pt&nrm=iso&tlng=pt Resumen La Teoría de la Estimación Estocástica se emplea para obtener información de la operación interna con respecto a la respuesta observable de un sistema tipo caja negra. Un problema por resolver es describir los parámetros internos a partir de un modelo de referencia. Se ha considerado que las dinámicas de los parámetros en un sistema estocástico están descritas por la relación de la varianza y covarianza de la señal observable. El método de los momentos de probabilidad permite obtener resultados que convergen a la respuesta deseada en un sentido de probabilidad.
La estimación para sistemas MIMO (Multiple Input, Multiple Output) requiere del cálculo de la matriz pseudoinversa aunque se considere que el modelo por el método del gradiente es óptimo; al aplicar esa técnica se propone un vector propio y valores propios afines para la selección de los parámetros, haciendo que la estimación pierda gran parte de sus propiedades de convergencia. Este artículo presenta el desarrollo de un estimador estocástico óptimo para un modelo de sistemas tipo caja negra con ruido en un espacio m-dimensional. Se describe un algoritmo para evaluar y construir la forma diagonal del sistema en un espacio de estados con el propósito de estimar las ganancias internas. Los resultados presentan una solución sin pérdida de generalidad de las características del modelo de referencia. La técnica de estimación usada se basa en el gradiente estocástico junto con la variable instrumental para mejorar su nivel de convergencia. Este tipo de matriz de contribución es óptima en un sentido de probabilidad. El algoritmo permite eliminar el cálculo de matrices pseudoinversas, que tiene una complejidad computacional de orden no lineal. La propuesta de la matriz diagonal sugiere una menor complejidad que los métodos utilizados tradicionalmente, ya que es de orden lineal, O(j), donde j ∈ N es la dimensión de la matriz. Los resultados muestran que es posible reconstruir la señal observable con una buena aproximación en un sentido de probabilidad, basado en la estimación por diagonales.<hr/>Abstract Estimation theory is a branch of statistics and signal processing that deals with estimating parameter values based on an observable signal treated as a random variable. The parameters describe an underlying physical setting in such a way that their values affect the distribution of the observed signal. An estimator attempts to approximate the unknown parameters using the stochastic signal. In estimation theory it is assumed that the output signal is random, with a probability distribution that depends on the parameters of interest. The estimator takes the measured observable signal as input and produces an estimate of the unknown internal gains. It is also preferable to derive an estimator that exhibits optimality, achieving minimum average error over some class, for example, an unbiased minimum-variance estimator. This paper presents the development of an optimal stochastic estimator for a black-box system in an m-dimensional space, observing noise with an unknown dynamics model. The results are described in a state space, with a discrete stochastic estimator and a noise characterization. The results are obtained by an algorithm that constructs the diagonal form of the state-space system. Thus, the matrix is estimated in probability considering the distribution function. The estimation technique uses an instrumental variable based on a stochastic gradient matrix. This kind of matrix contribution is optimal in the probability sense. This is a new technique for an instrumental-variable tool, and a diagonalization process avoiding the calculation of pseudo-inverse matrices is presented with a linear computational complexity O(j), where j is the diagonal matrix dimension. The results show that it is possible to reconstruct the observable signal with a good approximation in the probability sense, based on the diagonal estimation. The advantages with respect to traditional solutions lie in estimating the matrix contribution online with linear complexity.
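The flavor of a diagonal, pseudo-inverse-free recursive update can be illustrated with a small numerical sketch. The simulated black-box system, the AR(1) regressor, and the step size below are assumptions for illustration only; the point is that each gain is nudged separately along a stochastic gradient steered by a delayed-regressor instrumental variable, so the per-sample cost stays linear in the number of parameters, in the spirit of (but not identical to) the authors' algorithm.

```python
# Minimal sketch: per-coordinate stochastic-gradient update with an instrumental
# variable; no matrix inversion, O(j) work per sample for j parameters.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.4, 0.25])        # unknown internal gains (toy system)
theta_hat = np.zeros(3)                          # online estimate
mu = 0.05                                        # assumed step size

x = np.zeros(3)                                  # previous regressor, reused as instrument
for _ in range(20000):
    x_new = 0.7 * x + 0.5 * rng.normal(size=3)   # correlated (AR(1)) regressor
    y = x_new @ theta_true + 0.1 * rng.normal()  # noisy black-box output
    z = x                                        # instrument: correlated with x_new,
                                                 # uncorrelated with the output noise
    theta_hat += mu * z * (y - x_new @ theta_hat)  # element-wise (diagonal) update
    x = x_new

print("estimated gains:", np.round(theta_hat, 2))  # should land near theta_true
```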
<![CDATA[New Perspectives on the Use of Spatial Filters in Magnetoencephalographic Array Processing]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100115&lng=pt&nrm=iso&tlng=pt Abstract The increase in computer power of the last few decades has allowed the resurgence of the theory behind spatial filtering (a.k.a. beamforming) and its application to array signal processing. That is the case of magnetoencephalographic (MEG) data, which rely on dense arrays of detectors in order to measure brain activity non-invasively. In particular, spatial filters are used in MEG signal processing to estimate the magnitude and location of the current sources within the brain. This is achieved by calculating different beamformer-based indexes, which usually involve a large computational complexity. Here, a new perspective on how today’s computers make it possible to handle such complexity is presented, up to the point where new and ever more complex neural activity indexes can be developed. Such is the case of indexes based on eigenspace projections and reduced-rank beamformers, whose applicability is shown in this paper using real MEG measurements and realistic models. <![CDATA[Semantic Approach to Context-Aware Resource Discovery over Scholarly Content Structured with OAI-PMH]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100127&lng=pt&nrm=iso&tlng=pt Resumen Esencial a la noción de la Web es la idea de una comunidad abierta: cualquiera puede contribuir sus ideas al todo. Esta apertura, su dimensión y dinamismo imponen retos para el desarrollo de aplicaciones de descubrimiento de recursos para el quehacer educativo o de investigación. Sin embargo, se han dado muchos esfuerzos para organizar y estructurar la masa de datos. Los repositorios académicos han adoptado el Protocolo para Cosecha de Metadatos de la Iniciativa de Archivos Abiertos (OAI-PMH, por sus siglas en inglés) y los metadatos Dublin Core para la exposición de su información. Es así que resulta relevante el desarrollo de tecnologías que contribuyan al descubrimiento de recursos de interés tomando en cuenta las necesidades de información y el contexto del usuario. El presente documento describe un enfoque que considera los recursos de información estructurados con OAI-PMH, una representación ontológica y el contexto del usuario como insumos de un marco de trabajo para la construcción de aplicaciones de recuperación de información.<hr/>Abstract Essential to the notion of the Web is the idea of an open community: anyone can contribute their ideas to the whole. This openness, together with the size and dynamism of the community, imposes challenges on the development of resource discovery applications for educational or research activities. On the other hand, there have been many efforts to organize and structure this mass of data. Scholarly repositories have adopted the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and the Dublin Core metadata for exposing their information. Thus, it is relevant to develop technologies that improve the discovery of resources, taking into account the user's information needs and context. This paper describes an approach that considers information resources structured with OAI-PMH, an ontological representation, and the user context as inputs to a framework for building information retrieval applications.
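As a rough illustration of the pipeline this last abstract outlines, the sketch below harvests Dublin Core records through OAI-PMH and ranks them against a user-context keyword profile. The repository URL and the user profile are placeholders, and the ontological representation used by the authors is omitted; only the standard OAI-PMH request parameters and namespaces are assumed.

```python
# Minimal sketch: harvest oai_dc records via OAI-PMH and rank them against a
# user-context keyword profile. Endpoint and profile are placeholders.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url):
    """Fetch one ListRecords page of Dublin Core metadata from an OAI-PMH endpoint."""
    resp = requests.get(base_url, params={"verb": "ListRecords",
                                          "metadataPrefix": "oai_dc"}, timeout=30)
    root = ET.fromstring(resp.content)
    for record in root.iter(OAI + "record"):
        fields = [el.text or "" for el in record.iter()
                  if isinstance(el.tag, str) and el.tag.startswith(DC)]
        yield " ".join(fields).lower()

def rank_by_context(records, context_terms):
    """Score each record by how many of the user's context terms it mentions."""
    scored = [(sum(term in text for term in context_terms), text)
              for text in records]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    user_context = {"machine learning", "education", "metadata"}  # hypothetical profile
    repo = "http://example.org/oai"                               # placeholder URL
    for score, text in rank_by_context(harvest(repo), user_context)[:5]:
        print(score, text[:80])
```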
<![CDATA[Inferring Social Isolation in Older Adults through Ambient Intelligence and Social Networking Sites]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100143&lng=pt&nrm=iso&tlng=pt Abstract Early diagnosis of social isolation in older adults can prevent physical and cognitive impairment or further impoverishment of their social network. This diagnosis is usually performed by personal and periodic application of psychological assessment instruments. This situation encourages the development of novel approaches able to monitor risk situations in social interactions in order to obtain an early diagnosis and implement appropriate measures. This paper presents the development of a prediction model of social isolation in older adults through Ambient Intelligence (AmI) and Social Networking Sites (SNSs). The predictive model has been evaluated in terms of its accuracy, sensitivity, specificity, and predictive values. This paper also presents the results of an experimental test applying the proposed approach with real users, obtaining a prediction accuracy of 87% and a type II error rate of 15%. The proposed model will benefit institutions interested in developing technological solutions to detect early stages of social isolation, resulting in an improved quality of life for older adults. <![CDATA[Memory Binary Particle Swarm Optimization (MBPSO) Applied to a Spectrum Sharing Problem]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000100153&lng=pt&nrm=iso&tlng=pt Resumen Compartir espectro es una de las soluciones que permitirá que las Redes Heterogéneas (HetNet), formadas por diversas tecnologías de acceso inalámbrico, dispongan de un recurso espectral adicional. Con esto se promueve la coexistencia entre los diferentes sistemas de radio, incrementando con ello su eficiencia espectral, pero también la interferencia en el sistema. En este trabajo se implementa una estrategia que controla la interferencia y la asignación de canal en una HetNet, con el fin de maximizar la tasa de datos y la cantidad de usuarios que comparten espectro concurrentemente. Para resolver este problema se utilizó la técnica de Optimización Binaria por Cúmulo de Partículas con Memoria (MBPSO). A diferencia de la técnica de Optimización por Cúmulo de Partículas Socio-Cognitiva (SCPSO), MBPSO evita la convergencia prematura en óptimos locales y la pérdida de diversidad. Los resultados muestran que al utilizar MBPSO se obtienen mejores soluciones para la capacidad del sistema que cuando se utiliza SCPSO. Además, se compara el desempeño de MBPSO con las técnicas PSO Modificado (ModBPSO) y Optimización por Cúmulo de Partículas con Modulación Angular (AMPSO), ya que estas últimas también cuentan con la habilidad de explorar y explotar diversos espacios de soluciones.<hr/>Abstract Spectrum sharing is one of the solutions that allow Heterogeneous Networks (HetNets) to obtain additional spectral resources. The aim is to promote the coexistence of different radio systems in the same spectral portion, increasing the spectral efficiency of the HetNet but at the same time increasing the interference. In this paper, we tackle the problem of spectrum sharing in a HetNet composed of a macrocell and several femtocells. We propose a strategy in which the macrocell and the femtocells can simultaneously share the available bandwidth while avoiding intra-tier interference. Our approach is formulated as a binary optimization problem.
The fitness is evaluated using binary optimization techniques with memory to overcome the premature convergence and loss of diversity that Socio-Cognitive Particle Swarm Optimization (SCPSO) presents. The results show that by using the Memory Binary Particle Swarm Optimization (MBPSO) algorithm, the system's capacity is improved in comparison with the solutions obtained using SCPSO. The performance of MBPSO is also compared with the Angle Modulated PSO (AMPSO) and Modified BPSO (ModBPSO) algorithms.
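Since the abstract does not spell out the MBPSO update rules, the sketch below only illustrates the general ingredients it names: a binary PSO with a sigmoid transfer function plus a simple memory archive that re-seeds stagnant particles to counter premature convergence and loss of diversity. The toy fitness function is a hypothetical stand-in for the HetNet capacity objective, and all parameter values are assumptions.

```python
# Minimal sketch: binary PSO with a sigmoid transfer function and a memory archive
# that re-seeds stagnant particles. Not the authors' MBPSO or their HetNet objective.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_bits, iters = 20, 24, 200

def fitness(x):
    # Toy objective: reward using many channels (ones) but penalize adjacent-channel
    # conflicts, loosely mimicking a sharing/interference trade-off.
    conflicts = np.sum(x[:-1] * x[1:])
    return x.sum() - 2.0 * conflicts

X = rng.integers(0, 2, size=(n_particles, n_bits))
V = np.zeros((n_particles, n_bits))
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
memory = [row.copy() for row in X]                 # archive of past personal bests
stall = np.zeros(n_particles, dtype=int)

for _ in range(iters):
    gbest = pbest[np.argmax(pbest_f)]
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)  # sigmoid mapping
    for i, x in enumerate(X):
        f = fitness(x)
        if f > pbest_f[i]:
            pbest[i], pbest_f[i], stall[i] = x.copy(), f, 0
            memory.append(x.copy())
        else:
            stall[i] += 1
        if stall[i] > 20:                          # memory step: re-seed from the archive
            X[i] = memory[rng.integers(len(memory))].copy()
            stall[i] = 0

print("best fitness:", pbest_f.max(), "assignment:", pbest[np.argmax(pbest_f)])
```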