
Computación y Sistemas

Online version ISSN 2007-9737; print version ISSN 1405-5546

Comp. y Sist. vol. 18, no. 2, Ciudad de México, Apr./Jun. 2014

https://doi.org/10.13053/CyS-18-2-2014-030 

Regular Articles

 

An Adaptive Random Search for Unconstrained Global Optimization

 


Jonás Velasco1, Mario A. Saucedo-Espinosa1, Hugo Jair Escalante2, Karlo Mendoza1, César Emilio Villarreal-Rodríguez1, Óscar L. Chacón-Mondragón1, and Arturo Berrones1

 

1 Posgrado en Ingeniería de Sistemas, Facultad de Ingeniería Mecánica y Eléctrica, Universidad Autónoma de Nuevo León, Mexico. jonasovich2@gmail.com, m.a.saucedo.e@gmail.com, karlo.mendoza@gmail.com, cesarevr@gmail.com, olchacon.uanl@gmail.com, arturo.berrones@gmail.com

2 Departamento de Ciencias Computacionales, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico. hugo.jair@gmail.com

 

Abstract

The Adaptive Gibbs Sampling (AGS) algorithm is a new heuristic for unconstrained global optimization. AGS is a population-based method that uses a random search strategy to generate a set of new potential solutions. The random search combines the one-dimensional Metropolis-Hastings algorithm with the multidimensional Gibbs sampler in such a way that the noise level can be adaptively controlled according to the landscape, providing a good balance between exploration and exploitation over the whole search space. Local search strategies can be coupled to the random search in order to intensify the search in promising regions. We have performed experiments on three well-known test problems over a range of dimensions, for a resulting testbed of 33 instances. We compare the AGS algorithm against two deterministic methods and three stochastic methods. The results show that the AGS algorithm is robust on problems that involve the central sources of difficulty in global optimization, namely high dimensionality, multimodality, and non-smoothness.

Keywords: Random search, Metropolis-Hastings algorithm, heuristics, global optimization.
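
The abstract describes the sampler in enough detail to sketch its core loop: sweep the coordinates Gibbs-style, propose a one-dimensional Metropolis move on each, and adapt the per-coordinate noise level from accept/reject feedback. The following is a minimal illustrative sketch of that idea, not the authors' AGS implementation (which is population-based and can couple local search); the Gaussian proposals, the multiplicative step-adaptation rule, and the Rastrigin test function are all assumptions made here for the example.

    import numpy as np

    def coordinate_metropolis_search(f, x0, bounds, n_sweeps=2000, step=1.0, rng=None):
        # Gibbs-style sweep of one-dimensional Metropolis moves (illustrative only).
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        fx = f(x)
        steps = np.full(x.size, step)        # per-coordinate noise level
        best_x, best_f = x.copy(), fx
        for _ in range(n_sweeps):
            for i in range(x.size):          # one coordinate at a time, as in a Gibbs sweep
                y = x.copy()
                y[i] = np.clip(y[i] + rng.normal(0.0, steps[i]), *bounds[i])
                fy = f(y)
                # Metropolis rule on exp(-f): improving moves are always accepted.
                if fy <= fx or rng.random() < np.exp(fx - fy):
                    x, fx = y, fy
                    steps[i] *= 1.05         # accepted: widen the noise (explore)
                else:
                    steps[i] *= 0.95         # rejected: narrow the noise (exploit)
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        return best_x, best_f

    # Multimodal test case: the Rastrigin function in 5 dimensions.
    def rastrigin(x):
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    x_best, f_best = coordinate_metropolis_search(
        rastrigin, x0=np.full(5, 3.0), bounds=[(-5.12, 5.12)] * 5)
    print(f"best f = {f_best:.4f} at x = {np.round(x_best, 3)}")

The multiplicative adaptation nudges each coordinate's acceptance rate toward a moderate level, the usual rationale for adaptive scaling in Metropolis-Hastings samplers, and serves here as a simple stand-in for the landscape-dependent noise control described above.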

 


 

Acknowledgements

This work was supported in part by the Mexican National Council for Science and Technology (CONACyT) under grant 206705, and by the UANL-PAICYT program under the grant "Inference based on density estimation".

 
