Journal of Applied Research and Technology

On-line version ISSN 2448-6736 · Print version ISSN 1665-6423

J. Appl. Res. Technol. vol. 3 no. 3, Ciudad de México, Dec. 2005

 

A wearable neural interface for real-time translation of Spanish Deaf Sign Language to voice and writing

 

R. Villa-Angulo¹ & H. Hidalgo-Silva²

 

1 Instituto de Ingeniería, Universidad Autónoma de Baja California, Calle de La Normal S/N y Blvd. Benito Juárez, Fracc. Insurgentes Este, Mexicali, Baja California, 21280, México. ravilla@uabc.mx

2 Departamento de Ciencias de la Computación, CICESE, Km. 107 Carr. Tijuana-Ensenada, Ensenada, Baja California, 22800, México. Hugo@cicese.mx

 

Received: December 11, 2002.
Approved: November 30, 2005.

 

Abstract

This paper describes the design and implementation of a communication tool for persons with speech and hearing disabilities. The tool provides the user with a human-computer interface capable of capturing and recognizing gestures belonging to the Mexican Spanish Sign Alphabet. To capture the manual expressions, a data glove was constructed that senses the position of fifteen articulations of one of the user's hands; a location system that detects the position and movements of the hand with respect to the user's body was also built. The data-glove and location-system signals are processed by a pair of programmable automatons, whose outputs are sent to a personal computer that performs the gesture recognition and interpretation tasks. Artificial neural network techniques implement the mappings from the information space generated by the instruments to the interpretation space, where the representations of the gestures are found. Once a gesture is captured and interpreted, it is presented in written form on a screen mounted on the user's clothing, and in spoken form through a speaker.

Keywords: Neural Networks, Human-Computer Interfaces, Signed Languages.
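
The abstract gives no implementation details, but the neural mapping it describes can be illustrated with a minimal sketch in Python: one frame of sensor readings (fifteen joint flexions from the data glove plus a hand position from the location system) is concatenated into a feature vector and passed through a small feed-forward network that scores each letter of the sign alphabet. The layer width, the 27-letter class count, and the random stand-in weights are hypothetical choices for illustration, not the authors' trained network.

import numpy as np

N_JOINTS = 15     # flexion sensors on the data glove
N_LOCATION = 3    # hand position relative to the body (x, y, z) -- assumed 3-D
N_HIDDEN = 64     # hypothetical hidden-layer width
N_LETTERS = 27    # hypothetical number of sign-alphabet classes

rng = np.random.default_rng(0)

# Stand-in weights; the real system would learn these from recorded gestures.
W1 = rng.normal(0.0, 0.1, size=(N_HIDDEN, N_JOINTS + N_LOCATION))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, size=(N_LETTERS, N_HIDDEN))
b2 = np.zeros(N_LETTERS)

def classify(flexion, location):
    """Map one frame of sensor readings to a letter index."""
    x = np.concatenate([flexion, location])   # 18-D feature vector
    h = np.tanh(W1 @ x + b1)                  # hidden representation
    scores = W2 @ h + b2                      # one score per letter
    return int(np.argmax(scores))             # most likely letter

# One simulated frame: flexions normalized to [0, 1], position in meters.
frame_flexion = rng.uniform(0.0, 1.0, N_JOINTS)
frame_location = np.array([0.10, 0.30, 0.40])
print("predicted letter index:", classify(frame_flexion, frame_location))

In a wearable pipeline this function would run once per captured frame, with the programmable automatons supplying the already-digitized sensor values.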

 

Resumen

This document presents the design and implementation of a communication tool for persons with speech and hearing disabilities. The tool is conceived as a human-computer interface capable of capturing and recognizing gestures of Mexican signed Spanish. To capture the manual expressions, the system uses a data glove based on flexion sensors that measure the position of fifteen articulations of one of the user's hands, together with a system based on ultrasonic sensors that detects the position and movements of the hand with respect to the user's body. A pair of programmable automatons handles the information coming from the data glove and the motion tracker, and a personal computer carries out the gesture recognition and interpretation work.

To process this information, artificial neural network techniques map the information space generated by the instruments to a solution space containing the meanings of the gestures performed by the user. Once a gesture has been captured and interpreted, it is presented in written form on a screen mounted on the user's clothing, and in audible form through a speaker that pronounces the letter that was signed.
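
The output step described above (writing the recognized letter to the wearable screen and pronouncing it through a speaker) can likewise be sketched. The pyttsx3 text-to-speech library and the alphabet ordering below are stand-ins chosen for illustration; the original system drove its own display and speaker hardware.

import pyttsx3

# Hypothetical class-to-letter ordering (27 letters of the Spanish alphabet).
ALPHABET = "abcdefghijklmnñopqrstuvwxyz"

def present_letter(letter_index):
    """Show the recognized letter as text and speak it aloud."""
    letter = ALPHABET[letter_index]
    print(letter)              # stand-in for the clothing-mounted screen
    engine = pyttsx3.init()    # local text-to-speech engine
    engine.say(letter)         # stand-in for the body-mounted speaker
    engine.runAndWait()

present_letter(0)  # e.g. the index returned by the classifier sketch above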

 


 

