SciELO - Scientific Electronic Library Online

 
Computación y Sistemas

Print version ISSN 1405-5546

Comp. y Sist. vol.18 n.1 México Jan./Mar. 2014

http://dx.doi.org/10.13053/CyS-18-1-2014-020 

Articles

 

Towards Swarm Diversity: Random Sampling in Variable Neighborhoods Procedure Using a Lévy Distribution

 


 

Gonzalo Nápoles1, Isel Grau2, Marilyn Bello1, and Rafael Bello1

 

1 Artificial Intelligence Laboratory, Universidad Central "Marta Abreu" de Las Villas, Cuba. gnapoles@uclv.edu.cu, mbello@uclv.edu.cu, rbellop@uclv.edu.cu

2 Bioinformatics Laboratory, Universidad Central "Marta Abreu" de Las Villas, Cuba. igrau@uclv.edu.cu

 

Abstract

Particle Swarm Optimization (PSO) is a derivative-free search method for numerical optimization. The key advantages of this metaheuristic are its simplicity, few parameters, and high convergence rate. In the canonical PSO with a fully connected topology, a particle adjusts its position using two attractors: the best record stored by the particle itself, and the best point discovered by the entire swarm. This scheme leads to a high convergence rate, but it also progressively deteriorates swarm diversity. As a result, the particle swarm is frequently attracted to sub-optimal points. Once the particles have been drawn to a local optimum, they continue the search within a small region of the solution space, reducing the algorithm's exploration capability. To deal with this issue, this paper presents a variant of the Random Sampling in Variable Neighborhoods (RSVN) procedure using a Lévy distribution, which notably improves the search ability of PSO in multimodal problems.
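The two-attractor update described above can be sketched as follows. The coefficients shown are the constriction-style defaults popularized by Clerc and Kennedy for the canonical PSO, not values taken from this paper; the function name and signature are illustrative.

```python
import random

def pso_step(position, velocity, pbest, gbest,
             w=0.7298, c1=1.49618, c2=1.49618):
    """One canonical PSO update for a single particle.

    The two attractors are the particle's own best record (pbest)
    and the best point found by the entire swarm (gbest).
    """
    new_velocity = []
    new_position = []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()
        # inertia term + cognitive pull toward pbest + social pull toward gbest
        nv = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_velocity.append(nv)
        new_position.append(x + nv)
    return new_position, new_velocity
```

Because every particle is pulled toward the same gbest under a fully connected topology, positions cluster quickly; this is the diversity loss the paper addresses.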

Keywords: Swarm diversity, local optima, premature convergence, RSVN procedure, Lévy distribution.
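A common way to draw the heavy-tailed steps that a Lévy-based sampler needs is Mantegna's algorithm. The sketch below is a generic illustration of that sampling scheme under the stated assumptions, not the authors' exact RSVN procedure; the default stability index beta = 1.5 is a conventional choice, not one prescribed by the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step using Mantegna's algorithm.

    beta (0 < beta <= 2) is the stability index; smaller values give
    heavier tails, i.e. occasional long jumps that can help particles
    escape local optima during neighborhood sampling.
    """
    # scale factor for the numerator's normal distribution
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)  # numerator: normal with scale sigma
    v = random.gauss(0, 1)      # denominator: standard normal
    return u / abs(v) ** (1 / beta)
```

Most draws stay near zero, but the heavy tail occasionally produces a long jump, which is precisely the property that lets a Lévy-driven sampler re-diversify a converged swarm.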

 


 

DOWNLOAD ARTICLE IN PDF FORMAT

 


Creative Commons License All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License