Computación y Sistemas

On-line version ISSN 2007-9737; print version ISSN 1405-5546

Comp. y Sist. vol. 20, no. 1, Ciudad de México, Jan./Mar. 2016

https://doi.org/10.13053/cys-20-1-2364

A Parallel Tool for Numerical Approximation of 3D Electromagnetic Surveys in Geophysics

Octavio Castillo Reyes1 

Josep de la Puente1 

David Modesto1 

Vladimir Puzyrev1 

José María Cela1 

1 Computer Applications in Science & Engineering, Barcelona Supercomputing Center, Barcelona, Spain. octavio.castillo@bsc.es


Abstract

This paper presents an edge-based parallel code for the forward computations that arise when applying one of the most popular electromagnetic methods in geophysics, namely, the controlled-source electromagnetic method (CSEM). The computational implementation is based on the linear Edge Finite Element Method in 3D isotropic domains because it has the ability to eliminate spurious solutions and is claimed to yield accurate results. The framework is able to exploit embarrassingly-parallel tasks and the advantages of geometric flexibility, and it works with three different orientations of the dipole, or excitation source, on unstructured tetrahedral meshes in order to represent complex geological bodies through a local refinement technique. We demonstrate the performance and accuracy of our tool on the Marenostrum supercomputer (Barcelona Supercomputing Center) through scaling tests and canonical tests, respectively.

Keywords: Parallel computing; geophysics; edge-based finite element; CSEM; numerical solutions

1. Introduction

In the context of geophysical forward modeling around oil wells, electric resistivity is a parameter that plays an important role. The Marine Controlled-Source Electromagnetic Method (CSEM) has emerged as a useful exploration technique for mapping offshore hydrocarbon reservoirs and characterizing gas-hydrate-bearing shallow sediments [4]. In a standard configuration, the marine CSEM uses a deep-towed horizontal electric dipole (HED) to transmit electromagnetic signals into the seawater and sediments below the mudline [23].

An edge-based parallel code for the numerical simulation of marine CSEM surveys in 3D isotropic structures is presented. In order to represent complex bodies with high fidelity, we use unstructured tetrahedral meshes. The heart of our computational solution is the Edge Finite Element Method (EFEM) because it has the ability to eliminate spurious solutions and is claimed to yield accurate results. The framework is able to exploit embarrassingly-parallel tasks, i.e., tasks with no dependency (or communication) between them, and the advantages of geometric flexibility, and it works with three different orientations of the dipole (HED).

The marine CSEM response for a single HED at a single frequency requires a forward modeling whose computation can easily overwhelm single-core and modest multi-core computing resources [17]. In fact, the execution of real-life scale simulations of electromagnetic geophysical problems requires HPC because typical executions involve over 100,000 realizations, each dealing with several millions of degrees of freedom. To alleviate these issues, our parallel work-flow is focused on edge-level tasks such as the edges-elements array connectivity, the edge data computation (length, unit vector, local/global edge direction), the physical properties at each edge (electric resistivity, primary electric field), and the electric field interpolation.

Regarding the computational burden, only six unknowns are required for each element (Nédélec tetrahedral elements of lowest order). It is worth noting that linear vectorial Lagrange elements, or any other consistently linear 3D-vector functions over a tetrahedron, carry twelve unknowns, three at each of its four vertices. However, the state of the art is marked by a relative scarcity of robust edge-based codes to simulate these problems. This may be attributed to the fact that not all numerical approaches are well suited for the latest computing architectures. For that reason, the software stack presented here was designed with an architecture-aware approach.

We structure the paper as follows: in Section 2 we describe the background theory of the marine CSEM. In Section 3 we present the formulation of the electromagnetic (EM) field equations in isotropic domains. The parallel framework is described in Section 4. The performance and efficiency of the code are investigated using a 3D canonical model in Section 5. All experiments were performed on the Marenostrum supercomputer with two 8-core Intel Xeon E5-2670 processors at 2.6 GHz per node. The last section is dedicated to conclusions.

2. Marine Controlled-source Electromagnetic Method

The Marine Controlled-Source Electromagnetic Method (CSEM) is a geophysical technique for studying the subsurface electrical conductivity distribution, with a wide range of applications. CSEM techniques can be divided into two groups depending on the domain in which the collected data are interpreted: the time domain (TDEM) or the frequency domain (FDEM). In the case of oil prospecting, marine CSEM surveys are done predominantly using FDEM [2,14].

In the marine CSEM, also referred to as seabed logging [12], a deep-towed electric dipole transmitter is used to produce a low-frequency EM signal (primary field) which interacts with the electrically conductive Earth and induces eddy currents that become sources of a new EM signal (secondary field). The two fields, the primary one and the secondary one, add up to a resultant field, which is measured by remote receivers placed on the seabed. Since the secondary field at low frequencies, for which displacement currents are negligible, depends primarily on the electric conductivity distribution of the ground, it is possible to detect thin resistive layers beneath the seabed by studying the received signal [15]. Operating frequencies of transmitters in CSEM may range between 0.1 and 10 Hz, and the choice depends on the model dimensions. In most studies, typical frequencies vary from 0.25 to 1 Hz, which means that for source-receiver offsets of 10-12 km, the penetration depth of the method can extend to several kilometers below the seabed [1,4,6,15].

The disadvantage of the marine CSEM is its relatively low resolution compared to seismic imaging. Therefore, the marine CSEM is almost always used in conjunction with seismic surveying, as the latter helps to constrain the resistivity model. Figure 1 depicts the marine CSEM, which is nowadays a well-known geophysical prospecting tool in the offshore environment and commonplace in industry; examples can be found in [9,10,14,18,22].

Fig. 1 Marine CSEM 

The Marine CSEM is a viable and cost-effective oil exploration technique. When integrated with other geophysics data, mainly, seismic information, CSEM surveys are promising for adding value in shallow/deep waters. The outcomes and analysis of modeling with CSEM produce a more robust understanding of the prospection.

3. Edge Finite Element Approximation

The 3D EM modeling requires solving the diffusive Maxwell equations in a discretized form. The most popular numerical methods for EM forward modeling are the Finite Difference (FD), Finite Element (FEM), and Integral Equation (IE) methods. Among them, the FEM is the most suitable for modeling the EM response in complex geometries. However, for accurate computations, the divergence-free condition for the EM fields in source-free regions needs to be addressed by an additional penalty term, commonly called the Gauge condition, to alleviate possible spurious solutions [13,15].

As a result, within FEM the use of the Edge-based FEM (EFEM), also called Nédélec elements, has become very popular for solving EM field problems. In fact, EFEM is often said to be a cure for many difficulties that are encountered (particularly the elimination of spurious solutions) and is claimed to yield accurate results [13,16,21]. The basis functions of Nédélec elements are vectorial functions defined along the element edges. The tangential continuity of either the electric or the magnetic field is imposed automatically on the element interfaces, while the normal components can still be discontinuous. As a result, EFEM has the capability to model frequency/time domain EM fields in inhomogeneous complex bodies at arbitrary resistivity contrasts and for any survey type. Therefore, our code is based on the Nédélec element formulation of [7,8].

In geophysical applications, the low frequency EM field satisfies the following Maxwell’s equations:

$$\nabla \times \mathbf{E} = i\omega\mu_0\,\mathbf{H}, \qquad (1)$$

$$\nabla \times \mathbf{H} = \mathbf{J}_s + \sigma\mathbf{E}, \qquad (2)$$

where we adopt the harmonic time dependence $e^{-i\omega t}$, $\omega$ is the angular frequency, $\mu_0$ is the free-space magnetic permeability, $\mathbf{J}_s$ is the induced current in the conductive earth, and $\sigma$ is the background conductivity. Our formulation holds for general isotropic domains.

In EM field formulations with FEM and EFEM, and in order to capture the rapid change of the primary current, the anomalous (secondary-field) formulation is desirable [5]. In the anomalous field formulation the total field is decomposed into a primary (background) field and a secondary field [24]:

$$\mathbf{E} = \mathbf{E}_p + \mathbf{E}_s, \qquad (3)$$

$$\sigma = \sigma_p + \Delta\sigma. \qquad (4)$$

Based on this formulation, one can derive the following equation for the secondary electric field:

$$\nabla\times\nabla\times\mathbf{E}_s - i\omega\mu\sigma\,\mathbf{E}_s = i\omega\mu\,\Delta\sigma\,\mathbf{E}_p. \qquad (5)$$

In (5), the source term is the primary electric field, which is much smoother than the source current. In this sense, our formulation is able to work with three different orientations of the HED, which are given in [8].

Therefore, the primary field is calculated analytically using a horizontal layered-earth model and the secondary field is discretized by linear Nédélec elements. For this purpose, we first replace the following continuous condition:

$$\mathbf{E} \in H(\mathrm{curl};\Omega): \quad \nabla\times\mathbf{E}_p = \boldsymbol{\psi}, \qquad (6)$$

fixing the normal component ($\hat{n}$) of $\nabla\times\mathbf{E}$ at each point of the surface, with the discrete condition:

$$\int_{F}(\nabla\times\mathbf{E}_p)\cdot\hat{n}\; dS = \int_{F}\boldsymbol{\psi}\cdot\hat{n}\; dS \qquad \forall\, F \in \partial\Omega, \qquad (7)$$

stating that (6) is satisfied on average on each face element of $\partial\Omega$. We end up in all cases with one or several relations linking the integral of $\nabla\times\mathbf{E}_p$ on a surface to the integral of a given function $\boldsymbol{\psi}$. Applying Green's theorem and making use of the fact that the line integral of a Nédélec basis function is one on its associated edge and zero on the others [13], we find:

$$\int_{\partial\Omega}(\nabla\times\mathbf{E}_p)\cdot\hat{n}\; dS = \oint_{\partial\Omega}\mathbf{E}_p\cdot\mathbf{t}\; dl \qquad (8)$$

$$= \sum_i \int_{r_i}\mathbf{E}_p\cdot\mathbf{t}\; dl \qquad (9)$$

$$= \sum_i \pm\, d_i, \qquad (10)$$

where $r_i$ are the edges of the boundary $\partial\Omega$ and $d_i$ are the associated degrees of freedom (dofs). Finally, the following system of equations is obtained:

$$d_{0,i} + \sum_j c_{ij}\, d_j = 0, \qquad (11)$$

where the coefficients $c_{ij} = \pm 1$ depend on the relative orientation of the edges and the contours ($\hat{n}$), and the independent terms $d_{0,i}$ are the integral of the electric field $\mathbf{E}$ through a face or a surface. In order to improve the accuracy, we use Gaussian quadrature points of different orders to evaluate the integrals in (10).
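As an illustration of how an individual $d_i$ term from (10) can be evaluated, the short sketch below approximates the line integral of a primary field along a straight edge with Gauss-Legendre quadrature. It is only a sketch under our own assumptions: the function name, the quadrature order, and the sample field are hypothetical and are not taken from the original code.

```python
# Illustrative sketch: line integral of a given primary field Ep along a
# straight edge, evaluated with Gauss-Legendre quadrature (hypothetical names).
import numpy as np

def edge_line_integral(Ep, p1, p2, order=3):
    """Approximate the integral of Ep . dl along the segment p1 -> p2."""
    xi, w = np.polynomial.legendre.leggauss(order)    # nodes/weights on [-1, 1]
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    tangent = p2 - p1                                 # unnormalized tangent (length included)
    points = 0.5 * (1 - xi)[:, None] * p1 + 0.5 * (1 + xi)[:, None] * p2
    values = np.array([np.dot(Ep(x), tangent) for x in points])
    return 0.5 * np.dot(w, values)                    # 0.5 is the Jacobian of [-1, 1] -> edge

# Hypothetical smooth primary field and a single edge of the mesh.
Ep = lambda x: np.array([x[0], x[1], 0.0])
print(edge_line_integral(Ep, [0.0, 0.0, 0.0], [1.0, 0.5, 0.0]))
```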

Homogeneous Dirichlet boundary conditions are applied to the outer boundaries of the model. The EFEM discretization results in a linear system of equations, which is solved using the iterative Quasi-Minimal Residual (QMR) and Biconjugate Gradient (BCG) methods [3].
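The following minimal sketch mimics the kind of solve described above on a synthetic complex, sparse, symmetric system, using SciPy's generic QMR and BiCG routines. It makes no claim about the solvers actually implemented in the framework; the test matrix and all names are our own assumptions for illustration.

```python
# Sketch: iterative solution of a synthetic complex, sparse, symmetric system
# with QMR and BiCG (SciPy stand-ins for the framework's own solvers).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr, bicg

rng = np.random.default_rng(0)
n = 200
B = sp.random(n, n, density=0.02, random_state=0)
A = (B + B.T + (10.0 + 0.5j) * sp.eye(n)).tocsr()   # symmetric, diagonally dominant
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x_qmr, info_qmr = qmr(A, b)       # Quasi-Minimal Residual
x_bcg, info_bcg = bicg(A, b)      # Biconjugate Gradient
print(info_qmr, np.linalg.norm(A @ x_qmr - b))   # info == 0 signals convergence
print(info_bcg, np.linalg.norm(A @ x_bcg - b))
```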

4. Framework

Despite the popularity of the EFEM, there are few implementations of it. Furthermore, the 3D modeling of geophysical EM problems can easily overwhelm single-core and modest multi-core computing resources [17]. In fact, the execution of real-life scale simulations of electromagnetic geophysical problems requires HPC because typical executions involve over 100,000 realizations, each dealing with several millions of degrees of freedom. To alleviate these issues, our parallel framework is able to exploit embarrassingly-parallel tasks, i.e., tasks with no dependency (or communication) between them.

Figure 2 shows the software stack of our solution. Specific details and features of each module are as follows.

  1. Mesh. This module reads the geometric and topological properties of an FEM mesh: how the elements are connected and where their nodes are located. Our implementation is able to read as input nodal-based meshes in three different formats: Netgen, Gambit, and Neutral format [7].

  2. Mesh refinement. To increase the solution accuracy, our framework uses a uniform refinement. In tetrahedral meshes, this approach results in 8 times more tetrahedral elements.

  3. Counterclockwise numbering. In order to get a consistent notation in the whole domain, this module sets the node numbering within each element in a counterclockwise direction.

  4. Edges computation. In EFEM formulations, the unknowns are associated with edges instead of nodes. Because most FEM codes were developed for node-based formulations, it is necessary to convert the node numbering into an edge numbering. Therefore, this module computes a matrix that represents every element by its edges and another matrix that describes every edge by its two nodes, with dimensions (6 x TT) and (2 x TE), respectively, where TT is the number of elements and TE is the total number of edges. These matrices define the global/local edge directions in the mesh [7,13] (a small sketch of this conversion is given after this list).

  5. Primary field computation. This module computes the primary field on each edge according to the formulation in [8]. Furthermore, this module computes other edge values, such as the edge length and the unit edge vector, which are critical for the interpolation stage, through the vector basis functions defined in [7,8].

  6. Sigma edges computation. This module computes the sigma value for each edge. In the formulation of our geophysical application, this operation can be summarized by the following expression (see also the sketch after this list):

    $$S_i^E = \frac{\sum_{j=1}^{N} S_j^e}{N},$$

    where $S_i^E$ is the sigma value of the $i$-th edge, $N$ is the number of elements that share the $i$-th edge, and $S_j^e$ is the prescribed sigma value of the $j$-th such element in the mesh.

  7. Assembly. This module assembles the system matrix, whose general form is Ax = b. In electromagnetic simulations, and particularly in geophysical prospecting through EM methods such as CSEM, the matrix A is large, sparse, complex, and symmetric; the vector x contains the unknown coefficients, and the vector b stores the contributions of the primary field. To exploit the special properties of EFEM matrices, the parallel assembly process uses a Compressed Row Storage (CRS) format (a simplified assembly sketch is given at the end of this section).

  8. Boundary conditions (BC). Before the system of equations is ready to be solved, the imposition of BC is needed. Our code works with Dirichlet BC, and their imposition is accomplished by setting [13]

    $$b_{ind(i)} = v(i), \quad A_{ind(i),\,ind(i)} = 1, \quad A_{ind(i),\,j} = 0, \quad b_j = b_j - A_{j,\,ind(i)}\, v(i), \quad A_{j,\,ind(i)} = 0,$$

    for j ≠ ind(i), where ind(i) is a vector that stores the global indexes of the edges residing on the boundaries, and v(i) is a vector that contains the prescribed values of x. Different techniques are described in [9]. A minimal sketch of this elimination is given after Figure 2.

  9. Solver. In FEM or EFEM applications, the solvers are frequently iterative, but sometimes one may also want to use direct solvers. This module is able to work with two iterative solvers: BCG and QMR. Since the framework is based on an abstract data structure, it is possible to use other solvers with little effort.

  10. Interpolation. This module computes the electric response for an array of receivers. The interpolation process uses the vectorial basis functions defined in [7,8] because these automatically enforce the divergence-free condition for the EM fields. Moreover, the tangential continuity of the EM field is satisfied automatically.

  11. Output. Once a solution of the EFEM problem on a given mesh has been obtained, it should be post-processed using a visualization program. Our framework does not do the visualization by itself, but it generates output files (VTK format) with the final results. It also reports timing values in order to evaluate the performance.
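To make items 4 and 6 above concrete, here is a small Python sketch of the node-to-edge conversion and of the edge-wise sigma averaging. It is our illustration only: the array layouts are transposed with respect to the (6 x TT) and (2 x TE) matrices described in the text, the toy two-element mesh is hypothetical, and the actual implementation follows [7,8].

```python
# Illustrative sketch: edge connectivity from a nodal tetrahedral mesh (item 4)
# and averaging of the per-element conductivity onto edges (item 6).
import numpy as np

# The six edges of a tetrahedron expressed with its local node indices.
LOCAL_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def build_edge_connectivity(elements):
    """elements: (TT, 4) node indices. Returns elem2edge (TT, 6) and edge2node (TE, 2)."""
    edge_ids = {}
    elem2edge = np.empty((len(elements), 6), dtype=int)
    for e, tet in enumerate(elements):
        for k, (a, b) in enumerate(LOCAL_EDGES):
            key = tuple(sorted((tet[a], tet[b])))        # orientation-independent edge key
            edge_ids.setdefault(key, len(edge_ids))      # new global edge id if unseen
            elem2edge[e, k] = edge_ids[key]
    edge2node = np.array(sorted(edge_ids, key=edge_ids.get))
    return elem2edge, edge2node

def sigma_on_edges(elem2edge, sigma_elem, n_edges):
    """S_i^E = (sum of sigma over the N elements sharing edge i) / N."""
    acc = np.zeros(n_edges)
    cnt = np.zeros(n_edges)
    for e, edges in enumerate(elem2edge):
        acc[edges] += sigma_elem[e]
        cnt[edges] += 1
    return acc / cnt

# Hypothetical toy mesh: two tetrahedra sharing a face, with different sigma.
elements = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
sigma_elem = np.array([1.0, 3.0])
elem2edge, edge2node = build_edge_connectivity(elements)
print(sigma_on_edges(elem2edge, sigma_elem, len(edge2node)))  # shared edges average to 2.0
```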

Fig. 2 Software stack. Green dashed: pre-processing stage, red dashed: forward modeling, blue dashed: post-processing stage 
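Item 8 above prescribes Dirichlet values by modifying rows and columns of the assembled system. The sketch below reproduces that elimination on a dense NumPy matrix purely for readability; the framework applies the same operations to the sparse CRS matrix, and the 3x3 system and prescribed value here are made up for the example.

```python
# Sketch of the Dirichlet elimination of item 8 on a small dense system.
import numpy as np

def apply_dirichlet(A, b, ind, v):
    """Impose x[ind[i]] = v[i] on A x = b (arrays are modified in place)."""
    for i, k in enumerate(ind):
        b -= A[:, k] * v[i]      # b_j <- b_j - A_{j,k} v_i
        A[:, k] = 0.0            # A_{j,k} <- 0
        A[k, :] = 0.0            # A_{k,j} <- 0
        A[k, k] = 1.0            # A_{k,k} <- 1
        b[k] = v[i]              # b_k <- v_i
    return A, b

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
A, b = apply_dirichlet(A, b, ind=[0], v=[5.0])
print(np.linalg.solve(A, b))     # the first unknown is pinned to 5.0
```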

In order to meet the high computational cost of EFEM for EM fields in geophysical applications, our code is currently based on the shared-memory parallel model defined by the OpenMP standard [20]. OpenMP has been widely adopted in the scientific computing community, and most vendors support its Application Programming Interface (API) in their compiler suites. OpenMP not only offers portability of parallel programs but, since it is based on directives, it also represents a simple way to maintain a single code base for the serial and parallel versions of an application.

To exploit the advantages of geometric flexibility, our parallel approach is focused on embarrassingly parallel tasks, where the minimum unit of computing work is related to the edges in the mesh. Examples of embarrassingly parallel modules are the edges computation, the primary field computation, and the sigma edges computation. Another parallel task is the interpolation; the only difference lies in the parallelism level, since it works over the number of receivers (points) instead of the number of edges.

A detailed description of the main algorithms of our code can be found in [8].
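To close this section, the following simplified sketch illustrates the assembly pattern of item 7: per-element 6x6 contributions are collected as triplets and summed into the compressed-row (CRS/CSR) matrix handed to the solver. The function and argument names are our own assumptions; the real assembly is performed in parallel and works directly on the CRS structure.

```python
# Simplified assembly sketch (illustrative only): local 6x6 element matrices
# are accumulated in triplet (COO) form and converted to CSR; duplicate
# entries for shared edges are summed automatically on conversion.
import numpy as np
import scipy.sparse as sp

def assemble(elem2edge, elem_matrices, elem_rhs, n_edges):
    """elem2edge: (TT, 6) global edge indices per element,
    elem_matrices: (TT, 6, 6) local matrices, elem_rhs: (TT, 6) local right-hand sides."""
    rows, cols, vals = [], [], []
    b = np.zeros(n_edges, dtype=complex)
    for edges, Ke, fe in zip(elem2edge, elem_matrices, elem_rhs):
        rows.append(np.repeat(edges, 6))   # row index of every entry of Ke
        cols.append(np.tile(edges, 6))     # column index of every entry of Ke
        vals.append(Ke.ravel())
        b[edges] += fe                     # primary-field contribution
    A = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n_edges, n_edges)).tocsr()
    return A, b
```

It can be driven, for instance, with the elem2edge array from the connectivity sketch given earlier, once per-element matrices and right-hand sides are available.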

5. Results

To verify the accuracy and performance of our modeling, we used the model defined in Figure 3. Our code is able to work with three different dipole orientations (x-oriented, y-oriented, and z-oriented) according to the formulation in [7,8]. This source transmits a carefully designed low-frequency EM signal into the subsurface. The main physical parameters for our test are described in Table 1.

Fig. 3 Layer model (2D slice) 

Table 1 Main physical parameters 

The experiments were performed on the Marenostrum supercomputer with two 8-core Intel Xeon E5-2670 processors at 2.6 GHz per node. To increase the solution accuracy, our implementation used a non-uniform refinement.

Table 2 summarizes the results of our tests. For each experiment, the problem size stays fixed while the number of processing units is increased (the strong scaling approach). The parallel efficiency is given by $\chi = \frac{S}{n\,S_n} \times 100$, where $S$ is the amount of time to complete a work unit with 1 processing unit, $n$ is the number of processing units, and $S_n$ is the amount of time to complete the same unit of work with $n$ processing units. In Table 2 the time is given in seconds.
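As a quick numerical illustration of this definition (with hypothetical timings, not the values of Table 2):

```python
def parallel_efficiency(S, n, Sn):
    """Strong-scaling efficiency chi = S / (n * Sn) * 100, in percent."""
    return S / (n * Sn) * 100.0

# e.g. 1000 s on 1 processing unit and 70 s on 16 units -> about 89 % efficiency
print(parallel_efficiency(1000.0, 16, 70.0))
```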

Table 2 Timers: summary of results 

From the results in Table 2 it is easy to see that in our experiments the execution time is not limited by the communication overhead; as a result, we achieved a quasi-linear speed-up. The latter issue is critical because if the computation time on each processor is smaller than the communication time, the speed-up can saturate. Table 2 also shows the total number of tetrahedral elements (TT) and the total number of edges (TE), which is a measure of the storage space required at run-time. TT and TE were determined by successively refined meshes.

In order to validate our numerical formulation, the components of $\mathbf{E}_e$ obtained from equation (3) are compared with the components of $\mathbf{E}_h$ obtained by EFEM in Figure 4. For the sake of clarity, Figure 4 only includes the results of Test 5 for an x-directed dipole. It is easy to see that our approximation converges to the desired solution as the number of dofs grows ($TT \approx 5.5\times10^6$ and $TE \approx 6.5\times10^6$ for Test 5).

Fig. 4 Total electric field components. Comparison between solution Eh (edge elements) from Test 5 and the exact solution Ee (analytic) 

In Table 3 we show the errors for the components of $\mathbf{E}_h$. Following the ideas of [11], the errors of the numerical solution $\mathbf{E}_h$ with respect to the exact solution $\mathbf{E}_e$ obtained from equation (5) are measured in the $L_1$-norm, $L_2$-norm, and $L_\infty$-norm.
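One straightforward way to evaluate such discrete error measures over the receiver values is sketched below; the synthetic vectors are placeholders, and the exact convention (absolute versus relative errors) used in Table 3 is not reproduced here.

```python
# Sketch: discrete L1-, L2- and Linf-norms of the misfit between a numerical
# field Eh and an exact field Ee (synthetic placeholder data).
import numpy as np

def error_norms(Eh, Ee):
    d = np.abs(np.asarray(Eh) - np.asarray(Ee))
    return d.sum(), np.sqrt(np.sum(d**2)), d.max()

Ee = (1.0 + 0.5j) * np.linspace(0.0, 1.0, 50)              # hypothetical exact values
Eh = Ee + 1e-3 * np.exp(1j * np.linspace(0.0, 3.0, 50))    # hypothetical EFEM values
print(error_norms(Eh, Ee))
```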

Table 3 Errors in the $L_1$-norm, $L_2$-norm, and $L_\infty$-norm for the solution $\mathbf{E}_h$ with respect to the exact solution $\mathbf{E}_e$ 

The errors in Table 3 demonstrate that lowest-order edge elements reach the desired accuracy when the number of edges (TE, i.e., the dofs in the mesh) is increased.

6. Conclusions

The electromagnetic methods are an established tool in geophysics, finding application in many areas such as hydrocarbon and mineral exploration, reservoir monitoring, CO2 storage characterization, geothermal reservoir imaging, and many others. In particular, the marine CSEM has become an important technique for reducing ambiguities in data interpretation in hydrocarbon exploration.

Considering the societal value of exploration geophysics, we presented an edge-based parallel code for the forward modeling of the marine CSEM in 3D isotropic structures. The framework is based on unstructured tetrahedral meshes because these have the ability to represent complex bodies with high fidelity. The heart of our computational solution is based on EFEM because it can eliminate spurious solutions and is claimed to yield accurate results.

Recent trends in parallel computing techniques were investigated for their use in mitigating the computational burden associated with electromagnetic modeling. Therefore, our parallel work-flow is focused on edge-level tasks such as the edges-elements array connectivity, the edge data computation (length, unit vector, local/global edge direction), the physical properties at each edge (electric resistivity, primary electric field), the matrix assembly, and the electric field interpolation. As a result, we obtained a parallel framework whose main modules are flexible and simple.

Concerning the computational burden, only six unknowns are required for each element (Nédélec tetrahedral elements of lowest order). It is worth noting that linear vectorial Lagrange elements, or any other consistently linear 3D-vector functions over a tetrahedron, carry twelve unknowns, three at each of its four vertices. In addition, the software stack presented here was designed with an architecture-aware approach.

The efficiency and accuracy of the code were evaluated through scalability tests (strong scaling) and error-norms for different mesh sizes. The results show not only a good parallel efficiency of our code but also an acceptable accuracy in the numerical approximation.

All the experiments were performed on the Marenostrum supercomputer at the Barcelona Supercomputing Center (www.bsc.es).

Future work will be aimed at the implementation of anisotropic cases and at the application of MPI communications, which are needed to exploit distributed-memory platforms.

Acknowledgements

This project received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 644602.

Authors gratefully acknowledge the support from the Mexican National Council for Science and Technology (CONACyT).

This work benefitted from helpful suggestions of Hélène Barucq and Julien Diaz from MAGIQUE-3D team at Institut National de Recherche en Informatique et en Automatique (INRIA) and two anonymous reviewers.

References

1. Abubakar, A., Habashy, T., Druskin, V., Alumbaugh, D., Zerelli, A., & Knizhnerman, L. (2006). Two-and-half-dimensional forward and inverse modeling for marine CSEM problems. SEG Annual Meeting, Society of Exploration Geophysicists.

2. Grayver, A. V. (2013). Three-dimensional controlled-source electromagnetic inversion using modern computational concepts. Ph.D. thesis, Freie Universität Berlin.

3. Basermann, A. (1995). Parallel sparse matrix computations in iterative solvers on distributed memory machines. PPSC, pp. 454-459.

4. Boulaenko, M., Hesthammer, J., Vereshagin, A., Gelting, P., Davies, R., & Wedberg, T. (2007). Marine CSEM technology - the Luva case. Houston Geological Society.

5. Cai, H., Xiong, B., Han, M., & Zhdanov, M. (2014). 3D controlled-source electromagnetic modeling in anisotropic medium using edge-based finite element method. Computers & Geosciences, Vol. 73, pp. 164-176.

6. Carazzone, J., Burtz, O., Green, K., Pavlov, D., & Xia, C. (2005). Three dimensional imaging of marine CSEM data. SEG Annual Meeting, Society of Exploration Geophysicists.

7. Castillo, O., de la Puente, J., Puzyrev, V., & Cela, J. (2015). Assessment of edge-based finite element technique for geophysical electromagnetic problems: efficiency, accuracy and reliability. Proceedings of the 1st Pan-American Congress on Computational Mechanics and XI Argentine Congress on Computational Mechanics, CIMNE, Buenos Aires, Argentina, pp. 984-995.

8. Castillo, O., de la Puente, J., Puzyrev, V., & Cela, J. (2015). Edge-based electric field formulation in 3D CSEM simulations: a parallel approach. Proceedings of the 6th International Conference and Workshop on Computing and Communication, IEEE, Vancouver, Canada.

9. Commer, M. & Newman, G. (2008). New advances in three-dimensional controlled-source electromagnetic inversion. Geophysical Journal International, Vol. 172, No. 2, pp. 513-535.

10. Constable, S. (2010). Ten years of marine CSEM for hydrocarbon exploration. Geophysics, Vol. 75, No. 5, pp. 75A67-75A81.

11. de la Puente, J., Käser, M., Dumbser, M., & Igel, H. (2007). An arbitrary high-order discontinuous Galerkin method for elastic waves on unstructured meshes - IV. Anisotropy. Geophysical Journal International, Vol. 169, No. 3, pp. 1210-1228.

12. Eidesmo, T., Ellingsrud, S., MacGregor, L., Constable, S., Sinha, M., Johansen, S., Kong, F., & Westerdahl, H. (2002). Sea bed logging (SBL), a new method for remote and direct identification of hydrocarbon filled layers in deepwater areas. First Break, Vol. 20, No. 3, pp. 144-152.

13. Jin, J. (2002). The Finite Element Method in Electromagnetics. Wiley, New York, second edition.

14. Key, K. (2012). Marine electromagnetic studies of seafloor resources and tectonics. Surveys in Geophysics, Vol. 33, No. 1, pp. 135-167.

15. Koldan, J., Puzyrev, V., de la Puente, J., Houzeaux, G., & Cela, J. (2014). Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics. Geophysical Journal International [accepted].

16. Nédélec, J.-C. (1980). Mixed finite elements in R3. Numerische Mathematik, Vol. 35, No. 3, pp. 315-341.

17. Newman, G. (2014). A review of high-performance computational strategies for modeling and imaging of electromagnetic induction data. Surveys in Geophysics, Vol. 35, No. 1, pp. 85-100.

18. Newman, G., Commer, M., & Carazzone, J. (2010). Imaging CSEM data in the presence of electrical anisotropy. Geophysics, Vol. 75, No. 2, pp. F51-F61.

19. Nguyen, T. (2006). Finite Element Methods: Parallel-Sparse Statics and Eigen-Solutions. Springer.

20. OpenMP Architecture Review Board (2015). OpenMP application program interface.

21. Rognes, M., Kirby, R., & Logg, A. (2009). Efficient assembly of H(div) and H(curl) conforming finite elements. SIAM Journal on Scientific Computing, Vol. 31, No. 6, pp. 4130-4151.

22. Weiss, C. & Newman, G. (2002). Electromagnetic induction in a fully 3-D anisotropic earth. Geophysics, Vol. 67, No. 4, pp. 1104-1114.

23. Ying, L. & Yuguo, L. (2012). A parallel finite element approach for 2.5D marine controlled source electromagnetic modeling. Geophysics, Vol. 72, No. 2, pp. WA51-WA62.

24. Zhdanov, M. (2009). Geophysical Electromagnetic Theory and Methods, Vol. 43. Elsevier.

Octavio Castillo Reyes received his bachelor's degree from the Xalapa Institute of Technology, Mexico. He is currently a PhD student in Computer Architecture at the Polytechnic University of Catalonia (Barcelona, Spain). He carries out his research in the Department of Computer Applications in Science & Engineering (CASE) of the Barcelona Supercomputing Center - National Supercomputing Center (BSC-CNS).

Josep de la Puente is a senior researcher at BSC, leader of the geophysical efforts at the Repsol-BSC Research Center, and expert on Discontinuous Galerkin methods for computational seismology using HPC platforms.

David Modesto is a junior postdoctoral researcher at BSC. He received his PhD in Civil Engineering with honors from the Polytechnic University of Catalonia. His main research topics are real-time generation of numerical solutions for wave agitation problems in harbors, reduced-order algorithms with user-friendly interactions, acceleration of processes related to coastal applications, and development of new harbor models.

Vladimir Puzyrev is a senior researcher at BSC who received awards from the National Academy of Sciences of Ukraine for his scientific work. He is currently developing massively parallel modelling and inversion algorithms for surface resistivity measurements, namely, controlled-source electromagnetics and magnetotellurics.

Jose Maria Cela received his PhD in Telecommunications Engineering from the Polytechnic University of Catalonia, where he is an associate professor. He is currently the director of the Department of Computer Applications in Science & Engineering (CASE) of the Barcelona Supercomputing Center - National Supercomputing Center (BSC-CNS).

This is an open-access article distributed under the terms of the Creative Commons Attribution License.