Computación y Sistemas

Print version ISSN 1405-5546

Comp. y Sist. vol.17 n.1 México Jan./Mar. 2013

 

Articles

 

Feature Selection using Associative Memory Paradigm and Parallel Computing

 


 

Mario Aldape-Pérez1,2, Cornelio Yáñez-Márquez1, Oscar Camacho-Nieto1, and Ángel Ferreira-Santiago2

 

1 Instituto Politécnico Nacional (CIC), Distrito Federal, México.

2 Instituto Politécnico Nacional (ESCOM), Distrito Federal, México. www.aldape.mx, cyanez@cic.ipn.mx, oscarc@cic.ipn.mx, www.cornelio.org.mx

 

Article received on 12/10/2012; accepted on 18/12/2012.

 

Abstract

The performance of most pattern classifiers improves when redundant or irrelevant features are removed. However, this is typically achieved either through computationally demanding methods or by constructing successive classifiers. This paper shows how the associative memory paradigm and parallel computing can be combined to perform feature selection tasks. The approach uses associative memories to obtain a mask value representing a subset of features, which clearly identifies irrelevant or redundant information for classification purposes. The performance of the proposed associative memory algorithm is validated by comparing the classification accuracy of the suggested model against that achieved by other well-known algorithms. Experimental results show that associative memories can be implemented on parallel computing infrastructure, reducing the computational cost of finding an optimal subset of features that maximizes classification performance.

Keywords: Feature selection, associative memory, pattern classification.
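The mask-based search described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: it replaces the associative-memory classifier with a simple leave-one-out 1-NN evaluator on an invented toy dataset, and uses a thread pool to distribute mask evaluations, mirroring the idea of scoring candidate feature subsets independently in parallel.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical toy dataset: features 0 and 1 separate the two classes,
# features 2 and 3 are noise (all values invented for illustration).
X = [
    (1.0, 0.9, 5.0, 0.1),
    (0.9, 1.1, 1.0, 0.9),
    (1.1, 1.0, 3.0, 0.5),
    (3.0, 3.1, 4.0, 0.2),
    (3.1, 2.9, 2.0, 0.8),
    (2.9, 3.0, 0.5, 0.4),
]
y = [0, 0, 0, 1, 1, 1]

def masked(x, mask):
    """Keep only the features selected by the binary mask."""
    return [v for v, keep in zip(x, mask) if keep]

def loo_accuracy(mask):
    """Leave-one-out 1-NN classification accuracy using only masked features."""
    correct = 0
    for i, xi in enumerate(X):
        dists = [
            (sum((a - b) ** 2 for a, b in zip(masked(xi, mask), masked(xj, mask))), y[j])
            for j, xj in enumerate(X) if j != i
        ]
        correct += min(dists)[1] == y[i]
    return correct / len(X)

# Enumerate every non-empty feature mask and score them in parallel.
masks = [m for m in product((0, 1), repeat=len(X[0])) if any(m)]
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(loo_accuracy, masks))

# Prefer fewer features when accuracies tie.
best_score, best_mask = max(zip(scores, masks), key=lambda t: (t[0], -sum(t[1])))
print(best_mask, best_score)
```

Because CPython threads do not run pure-Python code concurrently, a real implementation would use process-based workers or the parallel infrastructure the paper describes; the point here is only the structure: each of the 2^n - 1 mask evaluations is independent and can be distributed.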

 


 


 

Acknowledgements

The authors wish to thank the following institutions for their support in the development of this work: the National Council of Science and Technology of Mexico (CONACyT), the National System of Researchers of Mexico (SNI), the National Polytechnic Institute of Mexico (IPN, Project No. SIP-IPN 20121556), and the Institute of Science and Technology of the Federal District (ICyT DF, Project No. PIUTE10-77).

 

