SciELO RSS <![CDATA[Polibits]]> vol. num. 52 <![CDATA[<b>Editorial</b>]]> <![CDATA[<b>A Completeness of Metrics for Topological Relations in 3D Qualitative Spatial Reasoning</b>]]> For qualitative spatial reasoning, objects come in various dimensions. A considerable amount of effort has been devoted to the 2D representation and analysis of spatial relations. Here we present an exposition for 3D objects. There are three types of binary relations between pairs of objects: topological connectivity, cardinal directions, and distance relations. Combinations of these relations can provide additional useful knowledge. Spatial databases include both data and spatial relations; to facilitate end-user spatial querying, it is also important to associate natural language with these relations. Some work has been done in this regard for line-region and region-region topological relations in 2D, and very recent work has initiated the association between natural language, topology, and metrics for 3D objects. However, prior efforts have lacked rigorous analysis, expressive power, and completeness of the associated metrics. Herein we present a detailed study of the new metrics required to bridge the gap between topological connectivity and size information for integrated reasoning in spatial databases. The complete set of metrics that we present should be useful for a variety of applications dealing with 3D objects, including regions with vague boundaries. <![CDATA[<b>EMiner</b>: <b>A Tool for Selecting Classification Algorithms and Optimal Parameters</b>]]> In this paper, a Genetic Algorithm (GA) is used to search for combinations of learning algorithms and associated parameters with maximum accuracy. An important feature of the approach is that the GA's initial population is formed using parameter values gathered from ExpDB (a public database of data mining experiments).
The proposed approach was implemented in a tool called EMiner, built on top of a grid-based software infrastructure for developing collaborative applications in the medicine and healthcare domains (the ECADeG project). Experiments were performed on 16 datasets from the UCI repository. The results show that the strategy of combining the data from ExpDB via the GA is effective in finding classification models with good accuracy. <![CDATA[<b>An Approach towards Semi-automated Biomedical Literature Curation and Enrichment for a Major Biological Database</b>]]> As part of a large-scale biocuration project, we are developing innovative techniques to process the biomedical literature and extract information relevant to specific biological investigations. Biological experts routinely extract core information from the scientific literature through a manual process known as scientific curation. The aim of our activity is to improve the efficiency of this process by leveraging natural language processing technologies in a text mining system. We pursue two lines of investigation: (1) finding information relevant for curation and presenting it in an adaptive interface, and (2) using sentence-similarity techniques to create interlinks across articles in order to enable a process of knowledge discovery. <![CDATA[<b>Warnings and Recommendation System for an E-Learning Platform</b>]]> A warning-message and recommendation system for an e-learning platform is proposed. Its goal is to identify which students are likely to show poor academic performance and to give them timely feedback through alerts and recommended material. The proposed system uses a set of profiles previously identified by a student profiling model, based on socio-economic data (age and gender) and web-navigation data from the system (number of accesses to resources, percentage of accesses in class, average absence time, and average session length).
Each profile is analyzed and a warning message is assigned to it; in addition, the sequences of consultations performed by students with high academic performance are recognized and used to choose which resources to recommend. Based on the sequence performed by a student in the current session, the platform may recommend access to specific resources. <![CDATA[<b>Bi-variate Wavelet Autoregressive Model for Multi-step-ahead Forecasting of Fish Catches</b>]]> This paper proposes a two-stage hybrid multi-step-ahead forecasting model to improve the modeling of monthly pelagic fish-catch time series. In the first stage, the stationary wavelet transform is used to separate the raw time series into a high-frequency (HF) component and a low-frequency (LF) component, while the periodicities of each time series are obtained using the Fourier power spectrum. In the second stage, both the HF and LF components serve as inputs to a bi-variate autoregressive model that predicts the original time series. We demonstrate the utility of the proposed forecasting model on a monthly sardine-catch time series from the coastal zone of Chile covering the period from January 1949 to December 2011. Empirical results for 12-month-ahead forecasting show the effectiveness of the proposed hybrid forecasting strategy. <![CDATA[<b>Location Privacy-Aware Nearest-Neighbor Query with Complex Cloaked Regions</b>]]> The development of location-based services has spread over many aspects of modern social life. This development brings not only convenience to users' daily lives but also great concerns about users' location privacy. In such services, location-privacy-aware query processing that handles cloaked regions is becoming an essential part of preserving user privacy. However, state-of-the-art cloaked-region-based query processors focus only on handling rectangular regions and lack an efficient, scalable algorithm for other, more complex region shapes.
Motivated by this problem, we introduce enhancements and additional components to the location-privacy-aware nearest-neighbor query processor, namely the Vertices Reduction Paradigm and the Group Execution Agent, which provide efficient processing of complex polygonal and circular cloaked regions. We also provide a new tuning parameter to achieve a trade-off between answer optimality and system scalability. Experiments show that our query processing algorithm outperforms previous work in terms of processing time and system scalability. <![CDATA[<b>Cipher Image Damage</b>: <b>An Application of Filters</b>]]> In this paper, color images are encrypted, and the encrypted images are then subjected to occlusion damage of different sizes; the intention is to simulate an attack. Two aspects are discussed in this research. The first is to encrypt images with quality; that is, the encrypted images pass the randomness tests proposed in this paper. The second aspect deals with the problem of recovering the information of an encrypted image that has been damaged. To retrieve information from encrypted images, the encryption is carried out in two steps: in the first, a permutation is applied to the entire image, and in the second, the AES cryptosystem with variable permutations is used. To perform this task, an algorithm that uses the number π to generate the permutations is employed. To improve the sharpness of damaged deciphered images, two filters are applied: median and average. To measure the degree of improvement in the damaged images, two tests are proposed: the first is the correlation coefficient between adjacent pixels in the horizontal, vertical, and diagonal directions; the second is based on information entropy. <![CDATA[<b>An Implementation of Propositional Logic Resolution Applying a Novel Specific Algebra</b>]]> This paper presents a methodology for evaluating propositional logic satisfiability using resolution-refutation.
The method applies a strategy based on an algebra developed by the authors that estimates the possible outcomes of the expression and generates a logic value for refuting or accepting the satisfiability of the argument. <![CDATA[<b>Identification of Central Points in Road Networks using Betweenness Centrality Combined with Traffic Demand</b>]]> This paper aims to identify central points in road networks by taking traffic demand into account. This is done with a variation of betweenness centrality in which the graph corresponding to the road network is weighted according to the number of routes generated by the traffic demand. To test the proposed approach, three networks were created: the road networks of the cities of Porto Alegre and Sioux Falls, and a regular 10 × 10 grid. Trips were then microscopically simulated, and the results were compared with those of the proposed method. <![CDATA[<b>Project Scheduling</b>: <b>A Memetic Algorithm with Diversity-Adaptive Components that Optimizes the Effectiveness of Human Resources</b>]]> In this paper, a project scheduling problem is addressed. This problem incorporates valuable assumptions about the effectiveness of human resources and considers an optimization objective that is a priority for project managers: optimizing the effectiveness levels of the sets of human resources defined for the project activities. A memetic algorithm is proposed for solving the addressed problem. This algorithm incorporates diversity-adaptive components into the framework of an evolutionary algorithm; the incorporation of these components is meant to improve the performance of the evolutionary search in both exploitation and exploration. The performance of the memetic algorithm on instance sets with different complexity levels is compared with that of the heuristic search and optimization algorithms reported so far in the literature for this problem.
The results obtained from the performance comparison indicate that the memetic algorithm significantly outperforms the algorithms previously reported.
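The cipher-image abstract above mentions two randomness tests: the correlation coefficient between adjacent pixels and information entropy. As a minimal, self-contained sketch of how such tests can be computed (the function names and the synthetic 64×64 grayscale data below are our own illustrative assumptions, not the paper's algorithm or images):

```python
import random
from math import log2

def adjacent_pixel_correlation(img, width):
    """Pearson correlation between horizontally adjacent pixels.

    img is a flat, row-major list of 8-bit grayscale values.
    A well-encrypted image should give a value close to 0,
    while a natural (smooth) image gives a value close to 1.
    """
    xs, ys = [], []
    for start in range(0, len(img), width):
        row = img[start:start + width]
        xs.extend(row[:-1])  # each pixel ...
        ys.extend(row[1:])   # ... paired with its right-hand neighbor
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def shannon_entropy(img):
    """Shannon entropy in bits; 8 bits is the maximum for 8-bit pixels."""
    counts = {}
    for v in img:
        counts[v] = counts.get(v, 0) + 1
    n = len(img)
    return -sum(c / n * log2(c / n) for c in counts.values())

random.seed(0)
# Stand-in for a well-encrypted image: uniformly random pixels.
noise = [random.randrange(256) for _ in range(64 * 64)]
# Stand-in for a plain image: a smooth horizontal gradient per row.
smooth = [(x * 4) % 256 for _ in range(64) for x in range(64)]

r_noise = adjacent_pixel_correlation(noise, 64)
r_smooth = adjacent_pixel_correlation(smooth, 64)
e_noise = shannon_entropy(noise)
e_smooth = shannon_entropy(smooth)
```

On these synthetic inputs, the random "ciphertext" yields a correlation near 0 and an entropy near the 8-bit maximum, whereas the smooth gradient yields a correlation near 1 and a markedly lower entropy, which is the separation such tests rely on.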