<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>1405-5546</journal-id>
<journal-title><![CDATA[Computación y Sistemas]]></journal-title>
<abbrev-journal-title><![CDATA[Comp. y Sist.]]></abbrev-journal-title>
<issn>1405-5546</issn>
<publisher>
<publisher-name><![CDATA[Instituto Politécnico Nacional, Centro de Investigación en Computación]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S1405-55462013000400012</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[3D Modeling of the Mexican Sign Language for a Speech-to-Sign Language System]]></article-title>
<article-title xml:lang="es"><![CDATA[Modelado 3D del lenguaje de señas mexicano para un sistema de voz-a-lenguaje de señas]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Caballero-Morales]]></surname>
<given-names><![CDATA[Santiago-Omar]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Trujillo-Romero]]></surname>
<given-names><![CDATA[Felipe]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Postgraduate Division, Technological University of the Mixteca]]></institution>
<addr-line><![CDATA[Oaxaca ]]></addr-line>
<country>Mexico</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2013</year>
</pub-date>
<volume>17</volume>
<numero>4</numero>
<fpage>593</fpage>
<lpage>608</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S1405-55462013000400012&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S1405-55462013000400012&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S1405-55462013000400012&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[There are many people with communication impairments, deafness being one of the most common. Deaf people use Sign Language (SL) to communicate, and translation systems (Speech/Text-to-SL) have been developed to assist such communication. However, since SLs depend on countries and cultures, there are differences between grammars, vocabularies, and signs, even between places with similar spoken languages. In Mexico, work in this field is very limited, so any development must consider the characteristics of the Mexican Sign Language (MSL). In this paper, we present a new approach to creating a Mexican Speech-to-SL system, integrating 3D modeling of the MSL with a multi-user Automatic Speech Recognizer (ASR) with dynamic adaptation. The 3D models (avatar) were developed by means of motion capture of an MSL performer. Kinect was used as the 3D sensor for the motion capture process, and DAZ Studio 4 was used for the animation. The multi-user ASR was developed using HTK, and Matlab was the programming platform for the Graphical User Interface (GUI). Experiments with a vocabulary set of 199 words were performed to validate the system. An accuracy of 96.2% was achieved for the ASR and for the interpretation into MSL of 70 words and 20 spoken sentences. The 3D avatar presented clearer realizations than those of standard video recordings of a human MSL performer.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[Hay muchas personas con problemas para comunicarse, siendo la sordera una de las más comunes. Personas con este problema hacen uso de Lenguaje de Señas (LSs) para comunicarse, y sistemas de traducción (Voz/Texto-a-LS) se han desarrollado para asistir a esta tarea. Sin embargo, porque los LSs son dependientes de países y culturas, hay diferencias entre gramáticas, vocabularios y señas, incluso si estos provienen de lugares con lenguajes hablados similares. En México, el trabajo es muy limitado en este campo, y cualquier desarrollo debe considerar las características del Lenguaje de Señas Mexicano (LSM). En este artículo, presentamos nuestro enfoque para un sistema de Voz-a-LS Mexicano, integrando el modelado 3D del LSM con un Reconocedor Automático de Voz (RAV) multi-usuario con adaptación dinámica. Los modelos 3D (avatar) fueron desarrollados por medio de captura de movimiento de un signante del LSM. Kinect fue usado como un sensor 3D para el proceso de captura de movimiento, y DAZ Studio 4 fue usado para su animación. El RAV multi-usuario fue desarrollado usando HTK y Matlab fue la plataforma de programación para la Interfaz Gráfica de Usuario (GUI). Experimentos con un vocabulario de 199 palabras fueron realizados para validar el sistema. Una precisión del 96.2% fue obtenida para el RAV e interpretación en vocabulario del LSM de 70 palabras y 20 frases habladas. Las realizaciones del avatar 3D fueron más claras que aquellas de grabaciones de video de un signante humano del LSM.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Mexican sign language]]></kwd>
<kwd lng="en"><![CDATA[automatic speech recognition]]></kwd>
<kwd lng="en"><![CDATA[human-computer interaction]]></kwd>
<kwd lng="es"><![CDATA[Lenguaje de señas mexicano]]></kwd>
<kwd lng="es"><![CDATA[reconocimiento automático de voz]]></kwd>
<kwd lng="es"><![CDATA[interacción humano-computadora]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[  	    <p align="justify"><font face="verdana" size="4">Art&iacute;culos regulares</font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="4"><b>3D Modeling of the Mexican Sign Language for a Speech&#45;to&#45;Sign Language System</b></font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="3"><b>Modelado 3D del lenguaje de se&ntilde;as mexicano para un sistema de voz&#45;a&#45;lenguaje de se&ntilde;as</b></font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="2"><b>Santiago&#45;Omar Caballero&#45;Morales, Felipe Trujillo&#45;Romero</b></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><i>Postgraduate Division, Technological University of the Mixteca, Oaxaca,</i> <i>Mexico.</i> <a href="mailto:scaballero@mixteco.utm.mx">scaballero@mixteco.utm.mx</a>, <a href="mailto:ftrujillo@mixteco.utm.mx">ftrujillo@mixteco.utm.mx</a></font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2">Article received on 15/10/2012    <br> 	Accepted 21/06/2013</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Abstract</b></font></p>  	    <p align="justify"><font face="verdana" size="2">There are many people with communication impairments, deafness being one of the most common. Deaf people use Sign Language (SL) to communicate, and translation systems (Speech/Text&#45;to&#45;SL) have been developed to assist such communication. However, since SLs depend on countries and cultures, there are differences between grammars, vocabularies, and signs, even between places with similar spoken languages. In Mexico, work in this field is very limited, so any development must consider the characteristics of the Mexican Sign Language (MSL). In this paper, we present a new approach to creating a Mexican Speech&#45;to&#45;SL system, integrating 3D modeling of the MSL with a multi&#45;user Automatic Speech Recognizer (ASR) with dynamic adaptation. The 3D models (avatar) were developed by means of motion capture of an MSL performer. Kinect was used as the 3D sensor for the motion capture process, and DAZ Studio 4 was used for the animation. The multi&#45;user ASR was developed using HTK, and Matlab was the programming platform for the Graphical User Interface (GUI). Experiments with a vocabulary set of 199 words were performed to validate the system. An accuracy of 96.2% was achieved for the ASR and for the interpretation into MSL of 70 words and 20 spoken sentences. 
The 3D avatar presented clearer realizations than those of standard video recordings of a human MSL performer.</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Keywords:</b> Mexican sign language, automatic speech recognition, human&#45;computer interaction.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Resumen</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Hay muchas personas con problemas para comunicarse, siendo la sordera una de las m&aacute;s comunes. Personas con este problema hacen uso de Lenguaje de Se&ntilde;as (LSs) para comunicarse, y sistemas de traducci&oacute;n (Voz/Texto&#45;a&#45;LS) se han desarrollado para asistir a esta tarea. Sin embargo, porque los LSs son dependientes de pa&iacute;ses y culturas, hay diferencias entre gram&aacute;ticas, vocabularios y se&ntilde;as, incluso si estos provienen de lugares con lenguajes hablados similares. En M&eacute;xico, el trabajo es muy limitado en este campo, y cualquier desarrollo debe considerar las caracter&iacute;sticas del Lenguaje de Se&ntilde;as Mexicano (LSM). En este art&iacute;culo, presentamos nuestro enfoque para un sistema de Voz&#45;a&#45;LS Mexicano, integrando el modelado 3D del LSM con un Reconocedor Autom&aacute;tico de Voz (RAV) multi&#45;usuario con adaptaci&oacute;n din&aacute;mica. Los modelos 3D (avatar) fueron desarrollados por medio de captura de movimiento de un signante del LSM. Kinect fue usado como un sensor 3D para el proceso de captura de movimiento, y DAZ Studio 4 fue usado para su animaci&oacute;n. El RAV multi&#45;usuario fue desarrollado usando HTK y Matlab fue la plataforma de programaci&oacute;n para la Interfaz Gr&aacute;fica de Usuario (GUI). Experimentos con un vocabulario de 199 palabras fueron realizados para validar el sistema. 
Una precisi&oacute;n del 96.2% fue obtenida para el RAV e interpretaci&oacute;n en vocabulario del LSM de 70 palabras y 20 frases habladas. Las realizaciones del avatar 3D fueron m&aacute;s claras que aquellas de grabaciones de video de un signante humano del LSM.</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><b>Palabras clave:</b> Lenguaje de se&ntilde;as mexicano, reconocimiento autom&aacute;tico de voz, interacci&oacute;n humano&#45;computadora.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><a href="/pdf/cys/v17n4/v17n4a12.pdf" target="_blank">DESCARGAR ART&Iacute;CULO EN FORMATO PDF</a></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>References</b></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>1. Nuance Communications, Inc. (2012).</b> Dragon Speech Recognition Software. Retrieved from <a href="http://www.nuance.com/dragon/index.htm" target="_blank">http://www.nuance.com/dragon/index.htm</a>.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062801&pid=S1405-5546201300040001200001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>2. IBM (2012).</b> WebSphere Voice. Retrieved from <a href="http://www&#45;01.ibm.com/software/voice/" target="_blank">http://www&#45;01.ibm.com/software/voice/</a>.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062803&pid=S1405-5546201300040001200002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>3. Lavie, A., Waibel, A., Levin, L., Finke, M., Gates, D., Gavalda, M., Zeppenfeld, T., &amp; Zhan, P. 
(1997).</b> JANUS III: Speech&#45;To&#45;Speech Translation In Multiple Languages. <i>IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP&#45;97),</i> Munich, Germany, 1, 99&#45;102.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062805&pid=S1405-5546201300040001200003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>4. Parker, M., Cunningham, S., Enderby, P., Hawley, M., &amp; Green, P. (2006).</b> Automatic speech recognition and training for severely dysarthric users of assistive technology: The STARDUST project. <i>Clinical Linguistics and Phonetics,</i> 20(2&#45;3), 149&#45;156.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062807&pid=S1405-5546201300040001200004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>5. Dalby, J. &amp; Kewley&#45;Port, D. (1999).</b> Explicit Pronunciation Training Using Automatic Speech Recognition Technology. <i>Computer&#45;Assisted Language Instruction Consortium (CALICO)</i> <i>Journal,</i> 16(3), 425&#45;445.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062809&pid=S1405-5546201300040001200005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>6. Rosetta Stone (2012).</b> Rosetta Stone Version 4 TOTALe. 
Retrieved from <a href="http://www.rosettastone.com/learn&#45;spanish" target="_blank">http://www.rosettastone.com/learn&#45;spanish</a>.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062811&pid=S1405-5546201300040001200006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>7. Cox, S., Lincoln, M., Nakisa, M., Wells, M., Tutt, M., &amp; Abbott, S. (2003).</b> The Development and Evaluation of a Speech to Sign Translation System to Assist Transactions. <i>International Journal of Human Computer Interaction,</i> 16(2), 141&#45;161.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062813&pid=S1405-5546201300040001200007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>8. San&#45;Segundo, R., Barra, R., D'Haro, L.F., Montero, J.M., C&oacute;rdoba, R., &amp; Ferreiros, J. (2006).</b> A Spanish speech to sign language translation system for assisting deaf&#45;mute people. <i>Ninth International Conference on Spoken Language Processing (INTERSPEECH 2006&#45;ICSLP),</i> Pittsburgh, PA, USA, 1399&#45;1402.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062815&pid=S1405-5546201300040001200008&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>9. 
Baldassarri, S., Cerezo, E., &amp; Royo&#45;Santas, F. (2009).</b> Automatic Translation System to Spanish Sign Language with a Virtual Interpreter. <i>Human&#45;Computer Interaction &#45; INTERACT 2009, Lecture Notes in Computer Science,</i> 5726, 196&#45;199.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062817&pid=S1405-5546201300040001200009&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>10. L&oacute;pez&#45;Colino, F. &amp; Col&aacute;s, J. (2011).</b> The Synthesis of LSE Classifiers: From Representation to Evaluation. <i>Journal of Universal Computer Science,</i> 17(3), 399&#45;425.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062819&pid=S1405-5546201300040001200010&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>11. Mass&oacute;, G. &amp; Badia, T. (2010).</b> Dealing with Sign Language Morphemes in Statistical Machine Translation. <i>4<sup>th</sup> Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies,</i> Valletta, Malta, 154&#45;157.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062821&pid=S1405-5546201300040001200011&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>      <!-- ref --><p align="justify"><font face="verdana" size="2"><b>12. Calvo, M.T. 
(2004).</b> Diccionario Espa&ntilde;ol &#45; Lengua de Se&ntilde;as Mexicana (DIELSEME): estudio introductorio. Direcci&oacute;n de Educaci&oacute;n Especial: M&eacute;xico.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062823&pid=S1405-5546201300040001200012&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>13. Saldivar&#45;Pi&ntilde;on, L., Chacon&#45;Murguia, M., Sandoval&#45;Rodriguez, R., &amp; Vega&#45;Pineda, J. (2012).</b> Human Sign Recognition for Robot Manipulation. <i>Pattern Recognition, Lecture Notes in Computer Science,</i> 7329, 107&#45;116.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062825&pid=S1405-5546201300040001200013&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>14. Rios, D. &amp; Schaeffer, S. (2012).</b> A Tool for Hand&#45;Sign Recognition. <i>Pattern Recognition, Lecture Notes in Computer Science,</i> 7329, 137&#45;146.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062827&pid=S1405-5546201300040001200014&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>15. Clymer, E., Geigel, J., Behm, G., &amp; Masters, K. 
(2012).</b> <i>Use of Signing Avatars to Enhance Direct Communication Support for Deaf and Hard&#45;of&#45;Hearing Users.</i> National Technical Institute for the Deaf (NTID), Rochester Institute of Technology, United States.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062829&pid=S1405-5546201300040001200015&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>16. Microsoft Co. (2012).</b> Kinect for Windows. Retrieved from <a href="http://www.microsoft.com/enus/kinectforwindows/" target="_blank">http://www.microsoft.com/enus/kinectforwindows/</a>.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062831&pid=S1405-5546201300040001200016&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>17. Albrecht, I., Haber, J., &amp; Seidel, H.P. (2003).</b> Construction and Animation of Anatomically Based Human Hand Models. <i>2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation,</i> San Diego, CA, USA, 98&#45;109.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062833&pid=S1405-5546201300040001200017&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>18. Bretzner, L., Laptev, I., &amp; Lindeberg, T. (2002).</b> Hand gesture recognition using multi&#45;scale colour features, hierarchical models and particle filtering. 
<i>Fifth IEEE International Conference on Automatic Face and Gesture Recognition,</i> Washington, DC, USA, 423&#45;428.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062835&pid=S1405-5546201300040001200018&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>19. Oikonomidis, I., Kyriazis, N., &amp; Argyros, A.A. (2011).</b> Efficient model&#45;based 3D tracking of hand articulations using Kinect. <i>Proceedings of the British Machine Vision Conference (BMVC 2011),</i> Dundee, UK, (101.1&#45;101.11).    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062837&pid=S1405-5546201300040001200019&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>20. Trigo, T.R. &amp; Pellegrino, S.R. (2010).</b> An analysis of features for hand&#45;gesture classification. <i>17<sup>th</sup> International Conference on Systems, Signals and Image Processing (IWSSIP 2010),</i> Rio de Janeiro, Brazil, 412&#45;415.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062839&pid=S1405-5546201300040001200020&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>21. DAZ Productions (2012).</b> DAZ Studio 4.5. Retrieved from <a href="http://www.daz3d.com/daz&#45;studio&#45;4&#45;pro/" target="_blank">http://www.daz3d.com/daz&#45;studio&#45;4&#45;pro/</a>.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062841&pid=S1405-5546201300040001200021&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>22. Bonilla, G. (2012).</b> <i>Interfaz de Voz para Personas con Disartria.</i> Tesis Ingeniero en Computaci&oacute;n, Universidad Tecnol&oacute;gica de la Mixteca (UTM), Huajuapan, Oaxaca, Mexico.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062843&pid=S1405-5546201300040001200022&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>23. Jurafsky, D. &amp; Martin, J.H. (2009).</b> <i>Speech and Language Processing: an introduction to natural language processing, computational linguistics, and speech recognition.</i> Upper Saddle River, N.J.: Pearson Prentice Hall.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062845&pid=S1405-5546201300040001200023&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>24. Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., &amp; Woodland, P. (2006).</b> <i>The HTK Book (for HTK Version 3.4).</i> Cambridge University Engineering Department: Cambridge, UK.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062847&pid=S1405-5546201300040001200024&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>25. Leggetter, C.J. &amp; Woodland, P.C. (1995).</b> Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. <i>Computer Speech and Language,</i> 9(2), 171&#45;185.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062849&pid=S1405-5546201300040001200025&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>26. Cu&eacute;tara, J.O. (2004).</b> <i>Fon&eacute;tica de la Ciudad de M&eacute;xico. Aportaciones desde las tecnolog&iacute;as del habla.</i> Maestro en Ling&uuml;istica Aplicada, Universidad Nacional Aut&oacute;noma de M&eacute;xico (UNAM), M&eacute;xico, D.F.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062851&pid=S1405-5546201300040001200026&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>27. Pineda, L.A., Villase&ntilde;or, L., Cu&eacute;tara, J., Castellanos, H., &amp; L&oacute;pez, I. (2004).</b> DIMEx100: A new phonetic and speech corpus for Mexican Spanish. <i>Advances in Artificial Intelligence (IBERAMIA 2004), Lecture Notes in Computer Science,</i> 3315, 974&#45;983.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062853&pid=S1405-5546201300040001200027&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>28. Pineda, L.A., Castellanos, H., Cu&eacute;tara, J., Galescu, L., Ju&aacute;rez, J., Llisterri, J., P&eacute;rez, P., &amp;</b> <b>Villase&ntilde;or, L. (2010).</b> The corpus dimex100: Transcription and evaluation. <i>Language Resources and Evaluation,</i> 44(4), 347&#45;370.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062855&pid=S1405-5546201300040001200028&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>29. Trujillo&#45;Romero, F. &amp; Caballero&#45;Morales, S.O. (2012).</b> Towards the Development of a Mexican Speech&#45;to&#45;Sign&#45;Language Translator for the Deaf Community. <i>Acta Universitaria,</i> 22(NE&#45;1), 83&#45;89.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062857&pid=S1405-5546201300040001200029&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>30. Sjolander, K. &amp; Beskow, J. (2006).</b> Wavesurfer. Retrieved from <a href="http://www.speech.kth.se/wavesurfer/" target="_blank">http://www.speech.kth.se/wavesurfer/</a>.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062859&pid=S1405-5546201300040001200030&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>31. Rabiner, L. (1989).</b> A tutorial on hidden Markov models and selected applications in speech recognition. <i>Proceedings of the IEEE,</i> 77(2), 257&#45;286.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062861&pid=S1405-5546201300040001200031&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>32. National Institute of Standards and Technology (NIST) (s.f.).</b> The History of Automatic Speech Recognition Evaluations at NIST. Retrieved from <a href="http://www.itl.nist.gov/iad/mig/publications/ASRhistory/" target="_blank">http://www.itl.nist.gov/iad/mig/publications/ASRhistory/</a>.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062863&pid=S1405-5546201300040001200032&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>      ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="">
<collab>Nuance Communications, Inc</collab>
<source><![CDATA[Dragon Speech Recognition Software]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="">
<collab>IBM</collab>
<source><![CDATA[WebSphere Voice]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lavie]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Waibel]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Levin]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Finke]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Gates]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Gavalda]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Zeppenfeld]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Zhan]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<source><![CDATA[JANUS III: Speech-To-Speech Translation In Multiple Languages. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-97)]]></source>
<year>1997</year>
<volume>1</volume>
<page-range>99-102</page-range><publisher-loc><![CDATA[Munich]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Parker]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Cunningham]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Enderby]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
<name>
<surname><![CDATA[Hawley]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Green]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automatic speech recognition and training for severely dysarthric users of assistive technology: The STARDUST project]]></article-title>
<source><![CDATA[Clinical Linguistics and Phonetics]]></source>
<year>2006</year>
<volume>20</volume>
<numero>2-3</numero>
<issue>2-3</issue>
<page-range>149-156</page-range></nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dalby]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Kewley-Port]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Explicit Pronunciation Training Using Automatic Speech Recognition Technology]]></article-title>
<source><![CDATA[Computer-Assisted Language Instruction Consortium (CALICO) Journal]]></source>
<year>1999</year>
<volume>16</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>425-445</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="">
<collab>Rosetta Stone</collab>
<source><![CDATA[Rosetta Stone Version 4 TOTALe]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cox]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Lincoln]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Nakisa]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Wells]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Tutt]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Abbott]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The Development and Evaluation of a Speech to Sign Translation System to Assist Transactions]]></article-title>
<source><![CDATA[International Journal of Human Computer Interaction]]></source>
<year>2003</year>
<volume>16</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>141-161</page-range></nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[San-Segundo]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Barra]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[D'Haro]]></surname>
<given-names><![CDATA[L.F.]]></given-names>
</name>
<name>
<surname><![CDATA[Montero]]></surname>
<given-names><![CDATA[J.M.]]></given-names>
</name>
<name>
<surname><![CDATA[Córdoba]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Ferreiros]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<source><![CDATA[A Spanish speech to sign language translation system for assisting deaf-mute people. Ninth International Conference on Spoken Language Processing (INTERSPEECH 2006-ICSLP)]]></source>
<year>2006</year>
<page-range>1399-1402</page-range><publisher-loc><![CDATA[Pittsburgh, PA]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Baldassarri]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Cerezo]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
<name>
<surname><![CDATA[Royo-Santas]]></surname>
<given-names><![CDATA[F.]]></given-names>
</name>
</person-group>
<source><![CDATA[Automatic Translation System to Spanish Sign Language with a Virtual Interpreter. Human-Computer Interaction - INTERACT 2009]]></source>
<year>2009</year>
<volume>5726</volume>
<page-range>196-199</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[López-Colino]]></surname>
<given-names><![CDATA[F.]]></given-names>
</name>
<name>
<surname><![CDATA[Colás]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The Synthesis of LSE Classifiers: From Representation to Evaluation]]></article-title>
<source><![CDATA[Journal of Universal Computer Science]]></source>
<year>2011</year>
<volume>17</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>399-425</page-range></nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Massó]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Badia]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
</person-group>
<source><![CDATA[Dealing with Sign Language Morphemes in Statistical Machine Translation. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies]]></source>
<year>2010</year>
<page-range>154-157</page-range><publisher-loc><![CDATA[Valletta]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Calvo]]></surname>
<given-names><![CDATA[M.T.]]></given-names>
</name>
</person-group>
<source><![CDATA[Diccionario Español - Lengua de Señas Mexicana (DIELSEME): estudio introductorio]]></source>
<year>2004</year>
<publisher-name><![CDATA[Dirección de Educación Especial]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Saldivar-Piñon]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Chacon-Murguia]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Sandoval-Rodriguez]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Vega-Pineda]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<source><![CDATA[Human Sign Recognition for Robot Manipulation. Pattern Recognition]]></source>
<year>2012</year>
<volume>7329</volume>
<page-range>107-116</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Rios]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Schaeffer]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<source><![CDATA[A Tool for Hand-Sign Recognition. Pattern Recognition]]></source>
<year>2012</year>
<volume>7329</volume>
<page-range>137-146</page-range></nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Clymer]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
<name>
<surname><![CDATA[Geigel]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Behm]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Masters]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<source><![CDATA[Use of Signing Avatars to Enhance Direct Communication Support for Deaf and Hard-of-Hearing Users]]></source>
<year>2012</year>
<publisher-name><![CDATA[National Technical Institute for the Deaf (NTID), Rochester Institute of Technology]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="">
<collab>Microsoft Co.</collab>
<source><![CDATA[Kinect for Windows]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Albrecht]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
<name>
<surname><![CDATA[Haber]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Seidel]]></surname>
<given-names><![CDATA[H.P.]]></given-names>
</name>
</person-group>
<source><![CDATA[Construction and Animation of Anatomically Based Human Hand Models. 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation]]></source>
<year>2003</year>
<page-range>98-109</page-range><publisher-loc><![CDATA[San Diego, CA]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bretzner]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Laptev]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
<name>
<surname><![CDATA[Lindeberg]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
</person-group>
<source><![CDATA[Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering. Fifth IEEE International Conference on Automatic Face and Gesture Recognition]]></source>
<year>2002</year>
<page-range>423-428</page-range><publisher-loc><![CDATA[Washington, DC]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Oikonomidis]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
<name>
<surname><![CDATA[Kyriazis]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
<name>
<surname><![CDATA[Argyros]]></surname>
<given-names><![CDATA[A.A.]]></given-names>
</name>
</person-group>
<source><![CDATA[Efficient model-based 3D tracking of hand articulations using Kinect. Proceedings of the British Machine Vision Conference (BMVC 2011)]]></source>
<year>2011</year>
<page-range>101.1-101.11</page-range><publisher-loc><![CDATA[Dundee]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Trigo]]></surname>
<given-names><![CDATA[T.R.]]></given-names>
</name>
<name>
<surname><![CDATA[Pellegrino]]></surname>
<given-names><![CDATA[S.R.]]></given-names>
</name>
</person-group>
<source><![CDATA[An analysis of features for hand-gesture classification. 17th International Conference on Systems, Signals and Image Processing (IWSSIP 2010)]]></source>
<year>2010</year>
<page-range>412-415</page-range><publisher-loc><![CDATA[Rio de Janeiro]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="">
<collab>DAZ Productions</collab>
<source><![CDATA[DAZ Studio 4.5]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bonilla]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<source><![CDATA[Interfaz de Voz para Personas con Disartria]]></source>
<year>2012</year>
</nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jurafsky]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Martin]]></surname>
<given-names><![CDATA[J.H.]]></given-names>
</name>
</person-group>
<source><![CDATA[Speech and Language Processing: an introduction to natural language processing, computational linguistics, and speech recognition]]></source>
<year>2009</year>
<publisher-loc><![CDATA[Upper Saddle River, N.J.]]></publisher-loc>
<publisher-name><![CDATA[Pearson Prentice Hall]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Young]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Evermann]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Gales]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Hain]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Kershaw]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Liu]]></surname>
<given-names><![CDATA[X.]]></given-names>
</name>
<name>
<surname><![CDATA[Moore]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Odell]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Ollason]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Povey]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Valtchev]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Woodland]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<source><![CDATA[The HTK Book (for HTK Version 3.4)]]></source>
<year>2006</year>
<publisher-loc><![CDATA[Cambridge]]></publisher-loc>
<publisher-name><![CDATA[Cambridge University Engineering Department]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Leggetter]]></surname>
<given-names><![CDATA[C.J.]]></given-names>
</name>
<name>
<surname><![CDATA[Woodland]]></surname>
<given-names><![CDATA[P.C.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models]]></article-title>
<source><![CDATA[Computer Speech and Language]]></source>
<year>1995</year>
<volume>9</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>171-185</page-range></nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cuétara]]></surname>
<given-names><![CDATA[J.O.]]></given-names>
</name>
</person-group>
<source><![CDATA[Fonética de la Ciudad de México. Aportaciones desde las tecnologías del habla]]></source>
<year>2004</year>
</nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pineda]]></surname>
<given-names><![CDATA[L.A.]]></given-names>
</name>
<name>
<surname><![CDATA[Villaseñor]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Cuétara]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Castellanos]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[López]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
</person-group>
<source><![CDATA[DIMEx100: A new phonetic and speech corpus for Mexican Spanish. Advances in Artificial Intelligence (IBERAMIA 2004)]]></source>
<year>2004</year>
<volume>3315</volume>
<page-range>974-983</page-range></nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pineda]]></surname>
<given-names><![CDATA[L.A.]]></given-names>
</name>
<name>
<surname><![CDATA[Castellanos]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Cuétara]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Galescu]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Juárez]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Llisterri]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Pérez]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
<name>
<surname><![CDATA[Villaseñor]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The corpus dimex100: Transcription and evaluation]]></article-title>
<source><![CDATA[Language Resources and Evaluation]]></source>
<year>2010</year>
<volume>44</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>347-370</page-range></nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Trujillo-Romero]]></surname>
<given-names><![CDATA[F.]]></given-names>
</name>
<name>
<surname><![CDATA[Caballero-Morales]]></surname>
<given-names><![CDATA[S.O.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Towards the Development of a Mexican Speech-to-Sign-Language Translator for the Deaf Community]]></article-title>
<source><![CDATA[Acta Universitaria]]></source>
<year>2012</year>
<volume>22</volume>
<numero>NE-1</numero>
<issue>NE-1</issue>
<page-range>83-89</page-range></nlm-citation>
</ref>
<ref id="B30">
<label>30</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sjolander]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[Beskow]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<source><![CDATA[Wavesurfer]]></source>
<year>2006</year>
</nlm-citation>
</ref>
<ref id="B31">
<label>31</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Rabiner]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A tutorial on hidden Markov models and selected applications in speech recognition]]></article-title>
<source><![CDATA[Proceedings of the IEEE]]></source>
<year>1989</year>
<volume>77</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>257-286</page-range></nlm-citation>
</ref>
<ref id="B32">
<label>32</label><nlm-citation citation-type="">
<collab>National Institute of Standards and Technology</collab>
<source><![CDATA[The History of Automatic Speech Recognition Evaluations at NIST]]></source>
<year></year>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
