SciELO RSS feed: Polibits, no. 53 (2016)
http://www.scielo.org.mx/rss.php?pid=1870-904420160001&lang=pt

Editorial
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100003&lng=pt&nrm=iso&tlng=pt

On the Sufficiency of Using Degree Sequence of the Vertices to Generate Random Networks Corresponding to Real-World Networks
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100005&lng=pt&nrm=iso&tlng=pt
Abstract: This paper investigates whether a random network whose degree sequence matches the degree sequence of the vertices in a real-world network exhibits values for other analysis metrics similar to those of the real-world network. We use the well-known Configuration Model to generate a random network from the degree sequence of the vertices in a real-world network, where the degree sequence need not be Poisson-style. The extent of similarity between the vertices of the random network and the real-world network with respect to a particular metric is evaluated as the correlation coefficient of the values of the vertices for that metric. We involve a total of 24 real-world networks in this study, with the spectral radius ratio for node degree (a measure of variation in node degree) ranging from 1.04 to 3.0 (i.e., from random networks to scale-free networks). We consider a suite of seven node-level metrics and three network-level metrics and identify the metrics for which the degree sequence is sufficient to generate random networks whose vertices have a very strong correlation (correlation coefficient of 0.8 or above) with the vertices of the corresponding real-world networks.

Adjustment of Wavelet Filters for Image Compression Using Artificial Intelligence
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100023&lng=pt&nrm=iso&tlng=pt
Abstract: A method for lossless image compression using the wavelet lifting transform with automatic adjustment of the wavelet filter coefficients for better compression is presented. The proposal is based on pattern recognition with a 1-NN classifier. Using pattern recognition, the lifting filter coefficients are optimized globally for each image. The proposed technique was applied to test images and the compression results were compared with those produced by the standard CDF (2,2) and CDF (4,4) wavelet filters. The results obtained with the optimized wavelet filters are better in terms of the entropy achieved for each image, as well as in the overall average.
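As a rough illustration of the lifting scheme referenced in the abstract above, the sketch below shows one level of the reversible CDF (2,2) (LeGall 5/3) lifting transform on a 1-D integer signal; it is not code from the paper. The predict_w and update_w parameters (defaulting to the standard CDF (2,2) weights 0.5 and 0.25) are hypothetical hooks added here only to suggest where a per-image adjustment of the filter coefficients could plug in; rounding-based lifting stays exactly invertible for any choice of weights because each step is undone by the same rounded correction with the opposite sign.

import math

def lifting_cdf22_forward(x, predict_w=0.5, update_w=0.25):
    """Split x into approximation (s) and detail (d) coefficients."""
    s = list(x[0::2])          # even samples
    d = list(x[1::2])          # odd samples
    n = len(d)
    # Predict step: detail = odd - round(predict_w * (left even + right even))
    for i in range(n):
        right = s[i + 1] if i + 1 < len(s) else s[i]   # symmetric border
        d[i] -= int(math.floor(predict_w * (s[i] + right) + 0.5))
    # Update step: even += round(update_w * (left detail + right detail))
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]
        curr = d[i] if i < n else d[n - 1]
        s[i] += int(math.floor(update_w * (left + curr) + 0.5))
    return s, d

def lifting_cdf22_inverse(s, d, predict_w=0.5, update_w=0.25):
    """Exactly invert lifting_cdf22_forward (lossless reconstruction)."""
    s, d = list(s), list(d)
    n = len(d)
    for i in range(len(s)):                            # undo update
        left = d[i - 1] if i > 0 else d[0]
        curr = d[i] if i < n else d[n - 1]
        s[i] -= int(math.floor(update_w * (left + curr) + 0.5))
    for i in range(n):                                 # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += int(math.floor(predict_w * (s[i] + right) + 0.5))
    x = [0] * (len(s) + n)
    x[0::2], x[1::2] = s, d
    return x

signal = [12, 14, 15, 15, 200, 198, 197, 196]
approx, detail = lifting_cdf22_forward(signal)
assert lifting_cdf22_inverse(approx, detail) == signal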
Data Reduction and Regression Using Principal Component Analysis in Qualitative Spatial Reasoning and Health Informatics
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100031&lng=pt&nrm=iso&tlng=pt
Abstract: The central idea of principal component analysis (PCA) is to reduce the dimensionality of a dataset consisting of a large number of interrelated variables while retaining as much as possible of the variation present in the dataset. In this paper, we use PCA-based algorithms in two diverse domains: qualitative spatial reasoning (QSR), to achieve lossless data reduction, and health informatics, to achieve data reduction along with improved regression analysis. In an adaptive hybrid approach, we combine PCA with traditional regression algorithms to improve their performance and representation. This yields prediction models that have both a better fit and fewer attributes than those produced by standard logistic regression alone. We present examples using both synthetic data and real health datasets from the UCI Repository.

A Segment-based Weighting Technique for URL-based Genre Classification of Web Pages
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100043&lng=pt&nrm=iso&tlng=pt
Abstract: We propose a segment-based weighting technique for genre classification of web pages. This technique exploits character n-grams extracted from the URL of the web page rather than from its textual content. The main idea of our technique is to segment the URL and assign a weight to each segment. Experiments conducted on three known genre datasets show that our method achieves encouraging results.

Improving Corpus Annotation Quality Using Word Embedding Models
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100049&lng=pt&nrm=iso&tlng=pt
Abstract: Web-crawled corpora contain a significant amount of noise. Automatic corpus annotation tools introduce even more noise by performing erroneous language identification or encoding detection, by introducing tokenization and lemmatization errors, and by adding erroneous tags or analyses to the original words. Our goal with the methods presented in this article was to use word embedding models to reveal such errors and to provide correction procedures. The evaluation focuses on analyzing and validating noun compounds, identifying bogus compound analyses, recognizing and concatenating fragmented words, detecting erroneously encoded text, restoring accents, and handling combinations of these errors in a Hungarian web-crawled corpus.

A Method Based on Patterns for Deriving Key Performance Indicators from Organizational Objectives
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100055&lng=pt&nrm=iso&tlng=pt
Abstract: Organizational strategic alignment implies consistency among the organizational elements, and organizational objectives act as essential elements for leading such alignment. In addition, key performance indicators (KPIs) have demonstrated usefulness for assisting strategic alignment by allowing a holistic control of the organization. Some approaches emphasizing objective-KPI relationships have been proposed; however, they lack a fully appropriate method for treating organizational objectives, KPIs, and objective-KPI relationships, and they exhibit drawbacks in terms of ambiguity, stakeholder understandability, and subjectivity.
In this paper, we propose a method for overcoming such drawbacks by using pre-conceptual-schema-based organizational patterns to operationalize organizational objectives in terms of KPIs. The result is a systematic method for deriving a set of candidate KPIs from a specific organizational objective. In addition, we present a lab study illustrating the main aspects of this proposal.

Optimizing Data Processing Service Compositions Using SLA's
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100065&lng=pt&nrm=iso&tlng=pt
Abstract: This paper proposes an approach for optimally accessing data by coordinating services according to Service Level Agreements (SLAs) for answering queries. We assume that services produce spatio-temporal data through Application Programming Interfaces (APIs), periodically and in batch. Assuming that there is no full-fledged DBMS providing data management functions, query evaluation (continuous, recurrent, or batch) is done through reliable service coordinations guided by SLAs. Service coordinations are optimized to reduce economic, energy, and time costs.

Recommendation for an Enterprise Content Management (ECM) Based on Ontological Models
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100077&lng=pt&nrm=iso&tlng=pt
Abstract: This paper presents a recommendation system for an enterprise content management (ECM) system based on ontological models. On many occasions the results of a search are not accurate enough, so the user of the ECM system must check them and discard those not related to the search. To make recommendations, we present a proposal in which the instances of the ontological model are reviewed in order to handle aliases and ambiguities. Comparisons are made between the results obtained with the traditional search model and the recommendations suggested by the model proposed in this work.

A Memetic Algorithm Applied to the Optimal Design of a Planar Mechanism for Trajectory Tracking
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100083&lng=pt&nrm=iso&tlng=pt
Abstract: Memetic algorithms (MAs), explored in recent literature, are hybrid metaheuristics formed by the synergistic combination of a population-based global search technique with one or more local search algorithms, which in turn can be exact or stochastic methods. Different versions of MAs have been developed, and although their use originally focused on combinatorial optimization, nowadays there are memetic developments for a wide selection of numerical problems: with or without constraints, mono- or multi-objective, static or dynamic, among others. This paper presents the design and application of a novel memetic algorithm, MemMABC, tested in a case study on optimizing the synthesis of a four-bar mechanism that follows a specific linear trajectory. The proposed method uses the MABC algorithm as a global searcher, with the addition of a modified Random Walk as a local searcher. MABC is a modified version of the Artificial Bee Colony algorithm, adapted to handle design constraints by implementing the feasibility rules of Deb.
The synthesis of four-bar mechanisms is a good example of a hard optimization problem, and such mechanisms are used in a wide variety of industrial applications. Simulation results show high-precision tracking of the proposed trajectory by the designed mechanism, demonstrating that MemMABC can be applied successfully as a tool for solving real-world optimization cases.

An Efficient Iterated Greedy Algorithm for the Makespan Blocking Flow Shop Scheduling Problem
http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442016000100091&lng=pt&nrm=iso&tlng=pt
Abstract: We propose a Blocking Iterated Greedy algorithm (BIG) that balances two complementary destruction and construction stages to solve the blocking flow shop scheduling problem and minimize the maximum completion time (makespan). The algorithm starts from an initial solution generated by a well-known heuristic; solutions are then improved through the destruction and construction stages until a stopping condition is met. The effectiveness and efficiency of the proposed technique are demonstrated by experimental results on both small randomly generated instances and Taillard's benchmark, in comparison with state-of-the-art methods.
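To make the destruction-construction loop described above concrete, here is a minimal, hypothetical Python sketch (not the authors' BIG implementation). It computes the blocking flow shop makespan via the standard departure-time recurrence and performs a basic iterated greedy step that removes d random jobs and greedily reinserts each at the position yielding the smallest makespan; the starting sequence and the acceptance rule are deliberately simplified.

import random

def blocking_makespan(seq, p):
    """Makespan of job sequence `seq` in a blocking flow shop.
    p[j][i] = processing time of job j on machine i (0-indexed)."""
    m = len(p[0])
    dep_prev = [0] * m                       # departure times of the previous job
    for j in seq:
        dep_cur = [0] * m
        t = dep_prev[0]                      # job j starts once machine 0 is vacated
        for i in range(m):
            t += p[j][i]                     # finish processing on machine i
            if i < m - 1:
                t = max(t, dep_prev[i + 1])  # blocked until the next machine is free
            dep_cur[i] = t                   # departure from machine i
        dep_prev = dep_cur
    return dep_prev[-1]

def destruction_construction(seq, p, d=4):
    """One iterated greedy step: remove d random jobs, then greedily
    reinsert each removed job at the position minimizing the makespan."""
    seq = list(seq)
    removed = [seq.pop(random.randrange(len(seq))) for _ in range(d)]
    for job in removed:
        best_pos, best_cmax = 0, float("inf")
        for pos in range(len(seq) + 1):
            cand = seq[:pos] + [job] + seq[pos:]
            cmax = blocking_makespan(cand, p)
            if cmax < best_cmax:
                best_pos, best_cmax = pos, cmax
        seq.insert(best_pos, job)
    return seq

def iterated_greedy(p, iters=200, d=4, seed=0):
    random.seed(seed)
    best = list(range(len(p)))               # identity sequence as a stand-in start
    best_cmax = blocking_makespan(best, p)
    for _ in range(iters):
        cand = destruction_construction(best, p, d)
        cmax = blocking_makespan(cand, p)
        if cmax < best_cmax:                 # accept only improvements (simplified rule)
            best, best_cmax = cand, cmax
    return best, best_cmax

# toy instance: 6 jobs x 3 machines
p = [[4, 3, 2], [2, 5, 1], [3, 2, 4], [5, 1, 3], [2, 2, 2], [1, 4, 5]]
print(iterated_greedy(p, iters=100, d=2))

In a fuller implementation the starting sequence would typically come from an NEH-style constructive heuristic and worsening candidates could be accepted under a simulated-annealing-like criterion, as is common in iterated greedy methods; those refinements are omitted here for brevity.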