Computación y Sistemas

On-line version ISSN 2007-9737 · Printed version ISSN 1405-5546

Comp. y Sist. vol. 17 no. 3, Ciudad de México, Jul./Sep. 2013

 

Editorial

 

High performance computing (HPC) has been developing for many years, coupled with various research and application trends such as grid computing, cloud computing, and green computing. It has become a key technology for research and development activities in many academic and industrial areas, especially when the solution of large and complex problems must meet time constraints. Nowadays, the design, analysis, and evaluation of scientific applications for parallel and distributed computing systems are almost commonplace. However, to reach high performance, many obstacles have to be overcome, whose origins lie within a broad spectrum of problem areas in parallel and distributed processing, resource optimization, and algorithms for high-performance computing.

The International Supercomputing Conference in Mexico (ISUM) provides a first-class open forum for scientists in academia, engineers in industry, and government representatives to present and discuss the issues, trends, and results that shape high performance computing.

The scope of the conference includes, but is not limited to, the following topics: cloud/grid computing systems, cyber-infrastructure, database applications and data mining, distributed system security, e-science, emerging technologies, high-performance scientific computing, HPC applications and new technologies, intelligent computing and neural networks, interconnection networks, modeling and simulation, parallel/distributed algorithms, reliability and fault-tolerance, resource allocation and management, scientific visualization, software tools and environments, task scheduling, wireless networks and mobile computing, among others.

At this forum, the scientific community presents technological trends and scientific applications that support the body of knowledge in diverse disciplines. It also provides a space to give continuity to inter-institutional collaborations.

With a track record of four consecutive years, ISUM has served as a successful forum to promote the exchange of experiences, knowledge, and research related to the use of high performance computing. It comprises plenary talks, lectures, workshops, technology expositions, poster presentations, and panel discussions. Between 300 and 500 participants attend the conference each year.

A number of recognized researchers from Europe, Latin America, Mexico, and the United States attended the event and shared their work through keynote talks, conference presentations, and workshops. We mention the academic keynote speakers in alphabetical order: Alexandre Tkatchenko (Fritz Haber Institute, Germany), Amit Majumdar (San Diego Supercomputer Center, USA), Andrei Tchernykh (CICESE Research Center, Mexico), Behrooz Parhami (University of California, Santa Barbara, USA), Carl Kesselman (University of Southern California, USA), Carlos Jaime Barrios (Universidad Industrial de Santander in Bucaramanga, Colombia), Glenn Bresnahan (Boston University, USA), Ian Foster (Argonne National Laboratory, USA), Jack Dongarra (University of Tennessee, USA), Jaime Klapp (Instituto Nacional de Investigaciones Nucleares, Mexico), Klaus Ecker (Ohio University, USA), Marc Snir (University of Illinois at Urbana-Champaign, USA), Mateo Valero (Barcelona Supercomputing Center, Spain), Michael Norman (San Diego Supercomputer Center, USA), Moisés Torres Martínez (University of Guadalajara, Mexico), Nicholas Cardo (Lawrence Berkeley National Laboratory, USA), Pablo Emilio Guillén Rondón (Universidad de los Andes, Venezuela), Rajkumar Buyya (University of Melbourne, Australia), Ramin Yahyapour (GWDG – University of Göttingen, Germany), Richard Jorgensen (Cinvestav, Mexico), Rick Stevens (Argonne National Laboratory, USA), Thomas Sterling (Louisiana State University, USA), Uwe Schwiegelshohn (Dortmund University, Germany), Vassil Alexandrov (University of Reading, UK), and William Thigpen (NASA Advanced Supercomputing Division, USA).

Some of the technology keynote speakers were Padmanabhan Iyer (Intel), Bob Anderson (Cray), Claude Paquette (Cray), Boris Cortes Silva (NetApp/EMTEC), Joshua Mora (AMD), Bill Nitzberg (PBS), Frederick Reid (SGI), Ivan Tovar (SSC), and Rajesh Suckramani (IBM).

This event is held under the coordination of the ISUM National Organizing Committee, formed by various institutions: Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Centro de Investigación en Matemáticas (CIMAT), Centro de Investigación y de Estudios Avanzados del IPN (CINVESTAV), Centro Nacional de Supercómputo (CNS), Corporación Universitaria para el Desarrollo de Internet (CUDI), Instituto Politécnico Nacional (IPN), Instituto Potosino de Investigación Científica y Tecnológica (IPICYT), Instituto Tecnológico de Monterrey (ITESM), Universidad Autónoma Metropolitana (UAM), Universidad de Colima (UCOL), Universidad de Guadalajara (UDG), Universidad de Guanajuato (UGTO), Universidad de Sonora (USON), and Universidad Nacional Autónoma de México (UNAM), with the support of CONACYT.

Furthermore, a set of leading technology companies, including AMD, CAPA4, CGG, Cisco Systems, Dell, EMC, Estratel, Global Computing, Hewlett Packard (HP), IBM, Intel Corporation, IQ Tech, Lufac, Microsoft, NVIDIA, PBS Works, Red Supercomputo, RMMC, SGI, SGrupo, CNS, SUN Microsystems, Telmex, and WOLFRAM, provided their support as sponsors to carry out this event.

ISUM 2014 is the next event in the series of highly successful International Supercomputing Conferences in Mexico, previously held as ISUM 2010 (Guadalajara, Mexico, March 2010), ISUM 2011 (San Luis Potosí, Mexico, March 2011), ISUM 2012 (Guanajuato, Mexico, March 2012), and ISUM 2013 (Manzanillo, Mexico, March 2013). ISUM 2014 will be held in Ensenada, Baja California, Mexico, in March 2014.

This special issue contains twelve selected papers. Papers in the first group deal with a broad spectrum of implementation issues, mostly related to the use of the Finite Element Method (FEM).

The paper by Cardoso-Nungaray et al. Parallel Processing Strategy for Solving the Thermal-Mechanical Coupled Problem Applied to a 4D System using the Finite Element Method proposes a strategy to solve the thermal-mechanical coupled problem using FEM. The authors simulate the deformation of a solid body due to internal forces provoked by temperature changes. They test the strategy on a car braking system, where the velocity of a rotating disk is decreased by friction. They conduct a quantitative analysis of the stress, strain, and temperature at selected points of the geometry, and a qualitative analysis based on visualizations of the simulation results.

The paper by González-García et al. Load Balancing for Parallel Computations with the Finite Element Method presents an overview of efforts to improve current techniques for load balancing and efficiency of FEM computations on large-scale parallel machines. The authors introduce a multilevel load balancer to reduce local load imbalance. They present the current state of the art in the area and show resource-aware parallelization approaches that lead to scalable applications on thousands of heterogeneous processors.
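
To make the load-balancing problem concrete, the following is a minimal, generic sketch of a longest-processing-time (LPT) greedy balancer in Python; the function name, cost model, and data layout are illustrative assumptions, and it does not reproduce the authors' multilevel, resource-aware scheme.

```python
# Generic LPT greedy balancer: assign the largest tasks first, each to the
# currently least-loaded worker. Illustrative sketch only.
import heapq

def lpt_balance(task_costs, num_workers):
    """task_costs: dict task_id -> estimated cost. Returns worker -> list of tasks."""
    heap = [(0.0, w) for w in range(num_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(num_workers)}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)               # least-loaded worker so far
        assignment[w].append(task)
        heapq.heappush(heap, (load + cost, w))
    return assignment

print(lpt_balance({"e1": 5.0, "e2": 3.0, "e3": 3.0, "e4": 2.0}, 2))
```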

The article by García-Blanquel et al. Parallel Adaptive Method for Selecting Points of Interest in Structures: Cranial Deformation presents a cranial deformation simulation. The authors apply a stress model using the Finite Element Method and propose boundary conditions according to the position of forces in a stereotactic frame fixed on the human head. They implement a parallel image segmentation algorithm using POSIX threads and MPI to build a three-dimensional cranium structure, together with adaptive methods for reducing the computational cost and discretization errors. For the experiments they use 134 computed tomography images.

The discrete element method (DEM) has become widely accepted as an effective numerical technique for addressing engineering problems. The article by Medel et al. Design and Optimization of Tunnel Boring Machines by Simulating the Cutting Rock Process using the Discrete Element Method proposes to use the DEM to build models that simulate the rock cutting process by a cutting disk and to measure the force interactions with hard rock that are essential in the design of tunnel boring machines. The authors predict the performance of design parameters that are critical in mechanical excavation.

The second group consists of four papers that deal with various issues related to the use of graphics processing units (GPUs), whose highly parallel structure makes them effective for algorithms in which data processing can be performed in parallel.

The article by Lopresti et al. Solving Multiple Queries through the Permutation Index in GPU describes an algorithm for building a permutation index used for approximate similarity search on databases, solving many content queries at the same time. The authors evaluate the tradeoff between answer quality and time performance of the implementation. The metric space model is used to model the similarity search problem. However, for a very large metric database, it is not enough to preprocess the dataset by building an index; it is also necessary to speed up the queries by using high performance computing.
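
As a rough illustration of the data structure named above, here is a minimal, sequential sketch of a permutation index in Python; the distance function, parameters, and helper names are hypothetical, and the paper's GPU implementation and multi-query processing are not reproduced.

```python
# Permutation index sketch: each object is represented by the order in which
# it "sees" a fixed set of permutants; queries are pre-ranked by permutation
# similarity and only a fraction is refined with real distances.
import numpy as np

def build_permutation_index(db, permutants, dist):
    """For each database object, store the permutant ids ordered by distance."""
    index = []
    for obj in db:
        d = [dist(obj, p) for p in permutants]
        index.append(np.argsort(d))
    return np.array(index)

def spearman_footrule(perm_a, perm_b):
    """Dissimilarity between two permutations (smaller means more similar)."""
    pos_a, pos_b = np.argsort(perm_a), np.argsort(perm_b)
    return np.abs(pos_a - pos_b).sum()

def query(q, db, permutants, index, dist, k=10, frac=0.1):
    """Rank candidates by permutation similarity, then refine a fraction exactly."""
    q_perm = np.argsort([dist(q, p) for p in permutants])
    scores = [spearman_footrule(q_perm, perm) for perm in index]
    candidates = np.argsort(scores)[: max(k, int(frac * len(db)))]
    return sorted(candidates, key=lambda i: dist(q, db[i]))[:k]

# Toy usage with 2-D points and Euclidean distance.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 2))
permutants = db[rng.choice(len(db), size=16, replace=False)]
dist = lambda a, b: float(np.linalg.norm(a - b))
index = build_permutation_index(db, permutants, dist)
print(query(db[0], db, permutants, index, dist, k=5))
```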

The article by Romero-Vivas et al. Analysis of Genetic Expression with Microarrays using GPU Implemented Algorithms presents an implementation of algorithms using Compute Unified Device Architecture (CUDA) to determine statistical significance in the evaluation of gene expression for microarray hybridization experiments. The results were compared with respect to traditional sequential implementations, and showed speedup by a factor of around 5-30. DNA microarrays are used to simultaneously analyze the expression level of thousands of genes under multiple conditions. Massive amount of generated data makes its analysis a challenge, and an ideal candidate for massive parallel processing.
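
For readers unfamiliar with per-gene significance statistics, the following vectorized sketch computes a Welch two-sample t statistic per gene with NumPy; it only shows the kind of embarrassingly parallel, gene-wise computation that maps well to GPUs, it is not the authors' CUDA code, and all names and array shapes are assumptions.

```python
# Per-gene Welch t statistic: one independent computation per gene, which is
# why this workload parallelizes so naturally.
import numpy as np

def per_gene_t_statistic(cond_a, cond_b):
    """cond_a, cond_b: arrays of shape (genes, replicates) of expression values."""
    mean_a, mean_b = cond_a.mean(axis=1), cond_b.mean(axis=1)
    var_a, var_b = cond_a.var(axis=1, ddof=1), cond_b.var(axis=1, ddof=1)
    na, nb = cond_a.shape[1], cond_b.shape[1]
    se = np.sqrt(var_a / na + var_b / nb)        # Welch standard error
    return (mean_a - mean_b) / se                # one t value per gene

# Example with random data: 10,000 genes, 5 replicates per condition.
rng = np.random.default_rng(0)
t = per_gene_t_statistic(rng.normal(size=(10_000, 5)), rng.normal(size=(10_000, 5)))
print(t.shape)
```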

In the article by Rudomin et al. GPU Generation of Large Varied Animated Crowds, the authors discuss methods for simulating, generating, animating and rendering crowds and their behaviors. This type of data processing is complex and costly. However, the authors show that such systems can scale up almost linearly using GPU clusters and HPC.

The paper by García-Cano et al. A Parallel PSO Algorithm for a Watermarking Application on a GPU presents the usability, advantages, and disadvantages of using CUDA to implement a particle swarm optimization algorithm. To test the proposed solutions, an image watermark hiding application is used. The authors use insertion/extraction algorithms and consider two objectives, fidelity and robustness, measured with the Mean Squared Error (MSE) and the Normalized Correlation (NC). These optimization criteria are evaluated by means of Pareto dominance.
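
The two metrics mentioned above have standard textbook definitions; the sketch below gives one common formulation in Python, assuming images and watermarks are NumPy arrays, and it may differ in detail from the exact formulations used by the authors.

```python
# Common definitions of the two watermarking metrics named in the summary.
import numpy as np

def mse(original, watermarked):
    """Mean Squared Error: lower values mean higher fidelity."""
    return np.mean((original.astype(float) - watermarked.astype(float)) ** 2)

def normalized_correlation(w, w_extracted):
    """Normalized Correlation between embedded and extracted watermarks:
    values near 1 indicate a robust extraction."""
    w = w.astype(float).ravel()
    w_extracted = w_extracted.astype(float).ravel()
    return np.dot(w, w_extracted) / (np.linalg.norm(w) * np.linalg.norm(w_extracted))

# Toy usage: a slightly perturbed image and an identical watermark.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
marked = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
w = rng.integers(0, 2, size=256)
print(mse(img, marked), normalized_correlation(w, w))   # small MSE, NC == 1.0
```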

The paper by Gil-Costa et al. Suffix Array Performance Analysis for Multicore Platforms evaluates the performance achieved by a suffix array over a 32-core platform. Suffix arrays are efficient data structures for solving complex queries in a number of applications related to text databases, for instance, biological databases. The authors propose an optimization technique to improve the use of the cache memory by reducing the number of cache memory replacements performed each time a new query is processed.
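
A minimal, single-threaded sketch of a suffix array and a substring query is given below for illustration; the construction here is a naive toy builder, and the paper's multicore and cache-aware optimizations are not reproduced.

```python
# Suffix array sketch: sorted suffix start positions plus binary search
# over the (lexicographically ordered) suffixes to locate a pattern.
from bisect import bisect_left, bisect_right

def build_suffix_array(text):
    """Return the starting positions of all suffixes of `text` in sorted order."""
    return sorted(range(len(text)), key=lambda i: text[i:])  # O(n^2 log n) toy builder

def find_occurrences(text, sa, pattern):
    """Locate all occurrences of `pattern` via binary search over the suffix array."""
    prefixes = [text[i:i + len(pattern)] for i in sa]
    lo = bisect_left(prefixes, pattern)
    hi = bisect_right(prefixes, pattern)
    return sorted(sa[lo:hi])

text = "banana"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "ana"))   # [1, 3]
```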

Cloud computing is a type of Internet-based computing where different services, such as computing infrastructure, software, applications, and storage, are accessed and shared as virtual resources and delivered to low-cost consumer PCs over the Internet. Dynamic resource allocation based on virtualization technology is analyzed in the paper Performance Evaluation of Infrastructure as a Service Clouds with SLA Constraints by Lezama-Barquet et al. The use of Service Level Agreements (SLAs) is a fundamentally new approach to job scheduling. With this approach, scheduling decisions are based on satisfying QoS constraints. The main idea is to provide different levels of service, each addressing a different set of customers. The authors develop a set of heuristics to allocate virtual resources to physical infrastructures in an efficient way. The algorithms are studied in the context of executing real workload traces available to the HPC community. In order to provide a performance comparison, a joint analysis of several metrics is performed. The authors show that the system can be dynamically adjusted in response to changes in the configuration and/or the workload. To this end, the past workload within a given time interval can be analyzed to determine appropriate parameters. The time interval for this adaptation depends on the dynamics of the workload characteristics and the IaaS configuration.
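
As a simplified illustration of this kind of resource allocation, the sketch below implements a generic first-fit-decreasing placement of virtual machines onto hosts by core count; it is a stand-in example only, not one of the authors' SLA-aware heuristics, and all names and capacities are assumptions.

```python
# Generic first-fit-decreasing VM placement: place the largest requests first,
# each on the first host with enough free capacity. Illustrative sketch only.
def first_fit_decreasing(vm_requests, host_capacities):
    """vm_requests: dict vm_id -> cores needed; host_capacities: dict host_id -> free cores.
    Returns a mapping vm_id -> host_id (or None if the VM could not be placed)."""
    free = dict(host_capacities)
    placement = {}
    for vm, cores in sorted(vm_requests.items(), key=lambda kv: -kv[1]):
        placement[vm] = None
        for host, cap in free.items():
            if cap >= cores:
                placement[vm] = host
                free[host] = cap - cores
                break
    return placement

print(first_fit_decreasing({"vm1": 4, "vm2": 2, "vm3": 8},
                           {"hostA": 8, "hostB": 6}))
```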

In the article by González et al. A New LU Decomposition on Hybrid GPU-Accelerated Multicore Systems, the authors introduce a theorem on the decomposition of a matrix into determinants, together with new linear transformations. They show that linear equation systems can be solved simultaneously with these new linear transformations. The authors also propose a modified Doolittle-Gauss LU decomposition in two versions: the first applied to the matrix and the second to the augmented matrix. The first is a new algorithm to compute determinants in exact form, and the second is a new LU elimination process to solve linear systems of equations in parallel on multicore systems with GPUs.
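
For reference, the classical serial Doolittle LU factorization (without pivoting), of which the paper proposes modified versions, can be sketched as follows; this is the textbook baseline, not the authors' hybrid GPU-accelerated method.

```python
# Classical Doolittle LU decomposition (no pivoting): A = L @ U with L unit
# lower triangular and U upper triangular.
import numpy as np

def doolittle_lu(A):
    """Factor a square matrix A into L (unit lower triangular) and U (upper triangular)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i, n):                       # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                   # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = doolittle_lu(A)
print(np.allclose(L @ U, A))   # True
```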

The last paper by Heras-Sánchez et al. Tridimensional-Temporal-Thematic Hydroclimate Modeling of Distributed Parameters for the San Miguel River Basin proposes a numerical analysis of diverse physiographic features and hydro-climate phenomena such as rainfall, temperature, soil evaporation, and topography, among others. The authors design a geographic database (GDB) that integrates data from multiple sources and develop strategies for structuring, identifying, handling, and improving the GDB information. An iterative digitizing and geo-referencing process is formulated to match remotely collected hydro-climate data with a digital elevation model and thematic images. Digital maps, images, thematic information, and spatially referenced vector data are used together. Continuous data are represented by discrete data structures that fit the mathematical models used to represent the given physical phenomena. Increasing database size and model complexity make problem-solving difficult. High performance computing is used to overcome the computational intractability of large and complex spatial analysis.

These 12 papers cover a diverse range of technologies, modeling, simulation and computing paradigms which exemplify the unique nature of the ISUM event.

In addition, this issue contains the report on a PhD thesis developed in another area: Assessment and Prediction of Water Quality in Shrimp Culture, by J.J. Carbajal and L.P. Sánchez.

Editing this issue of the Research Journal "Computación y Sistemas" would not have been possible without the help of the ISUM National Organizing Committee and referees from different countries. We appreciate their careful and responsible work. We also thank all the authors for considering ISUM and this special issue as outlets to publish their research results in the area of supercomputing applications and technologies.

 

Andrei Tchernykh
(CICESE Research Center, Mexico)
René Luna-García
(CIC-IPN, Mexico)
Juan Manuel Ramírez-Alcaraz
(Colima University, Mexico)
