Journal of Applied Research and Technology

On-line ISSN 2448-6736 · Print ISSN 1665-6423

J. Appl. Res. Technol. vol. 10 no. 2, Ciudad de México, Apr. 2012

 

Parallel Approach for Time Series Analysis with General Regression Neural Networks

 

J.C. Cuevas-Tello*1, R.A. González-Grimaldo1, O. Rodríguez-González1, H.G. Pérez-González1, O. Vital-Ochoa1

 

1 Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, Av. Dr. Manuel Nava No.8, Zona Universitaria, 78290 San Luis Potosí, SLP, México. *cuevas@uaslp.mx.

 

Abstract

The accuracy of time delay estimation from pairs of irregularly sampled time series is of great relevance in astrophysics. However, computational time is also important because large data sets must be studied. Besides introducing a new approach for time delay estimation, this paper presents a parallel approach that yields a fast algorithm for time delay estimation. The neural network architecture that we use is the General Regression Neural Network (GRNN). For the parallel approach, we use the Message Passing Interface (MPI) on a Beowulf-type cluster and on a Cray supercomputer, and we also use the Compute Unified Device Architecture (CUDA™) language on Graphics Processing Units (GPUs). We demonstrate that, with our approach, fast algorithms can be obtained for time delay estimation on large data sets with the same accuracy as state-of-the-art methods.

Keywords: neural networks, time series, parallel algorithms, machine learning.
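The GRNN named in the abstract is, in essence, a Nadaraya-Watson kernel regressor. As an illustration only, the following minimal serial sketch shows the idea: the variable names, the Gaussian kernel width `sigma`, and the brute-force candidate-delay search are our own assumptions, not the authors' exact method.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    """General Regression Neural Network: a Nadaraya-Watson kernel
    estimator with Gaussian kernels. Each training sample acts as a
    pattern unit; the prediction is a kernel-weighted average of the
    training targets."""
    # Squared distances between every query point and every training point
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)   # summation / output layers

def estimate_delay(t_a, y_a, t_b, y_b, delays, sigma=1.0):
    """Brute-force time-delay search: shift series B by each candidate
    delay, predict it with a GRNN built from series A, and keep the
    delay with the smallest mean squared error."""
    errors = [np.mean((grnn_predict(t_a, y_a, t_b - d, sigma) - y_b) ** 2)
              for d in delays]
    return delays[int(np.argmin(errors))]
```

On a pair of irregularly sampled sinusoids with a known shift, `estimate_delay` recovers the shift to within the candidate-grid resolution; this serial search is the part the paper then parallelizes.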

 

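The paper's parallel implementations rely on MPI on a cluster and on CUDA on GPUs. As a hedged illustration of the underlying data-parallel decomposition only (the `block_partition` helper and the serial simulation of ranks below are our own sketch, not the authors' code), the candidate-delay search can be block-partitioned among P ranks exactly as an MPI scatter/gather would do:

```python
import numpy as np

def block_partition(n_tasks, n_workers):
    """Split n_tasks indices into contiguous blocks, one per worker,
    the way an MPI scatter over ranks 0..n_workers-1 would."""
    base, extra = divmod(n_tasks, n_workers)
    counts = [base + (1 if r < extra else 0) for r in range(n_workers)]
    starts = np.concatenate(([0], np.cumsum(counts[:-1])))
    return [list(range(int(s), int(s) + c)) for s, c in zip(starts, counts)]

def parallel_argmin(errors_by_delay, n_workers=4):
    """Simulate the MPI pattern serially: each 'rank' finds the best
    delay index in its own block, then rank 0 reduces the local winners
    (a gather followed by a final argmin)."""
    local_best = []
    for block in block_partition(len(errors_by_delay), n_workers):
        if block:  # a rank may receive no work when tasks < workers
            local_best.append(min(block, key=lambda k: errors_by_delay[k]))
    return min(local_best, key=lambda k: errors_by_delay[k])
```

Because each candidate delay is evaluated independently, this decomposition has no inter-worker communication until the final reduction, which is what makes both the MPI and the GPU versions described in the paper attractive.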

 


 


Creative Commons License: All content of this journal, except where otherwise noted, is licensed under a Creative Commons License.