<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0188-9532</journal-id>
<journal-title><![CDATA[Revista mexicana de ingeniería biomédica]]></journal-title>
<abbrev-journal-title><![CDATA[Rev. mex. ing. bioméd]]></abbrev-journal-title>
<issn>0188-9532</issn>
<publisher>
<publisher-name><![CDATA[Sociedad Mexicana de Ingeniería Biomédica]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0188-95322024000200001</article-id>
<article-id pub-id-type="doi">10.17488/rmib.45.2.1</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Gammatone-Frequency Cepstral Coefficients Based Fear Emotion Level Recognition System]]></article-title>
<article-title xml:lang="es"><![CDATA[Sistema de Reconocimiento de Nivel de Emoción Basado en Coeficientes Cepstrales de Frecuencia Gammatone]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Prasetio]]></surname>
<given-names><![CDATA[Barlian Henryranu]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Hazmar]]></surname>
<given-names><![CDATA[La Ode Adriyan]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Syauqy]]></surname>
<given-names><![CDATA[Dahnial]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Widasari]]></surname>
<given-names><![CDATA[Edita Rosana]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
</contrib-group>
<aff id="Af1">
<institution><![CDATA[Universitas Brawijaya, Faculty of Computer Science]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>Indonesia</country>
</aff>
<aff id="Af2">
<institution><![CDATA[Universitas Brawijaya, Computer Engineering]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>Indonesia</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>08</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>08</month>
<year>2024</year>
</pub-date>
<volume>45</volume>
<numero>2</numero>
<fpage>6</fpage>
<lpage>22</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S0188-95322024000200001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S0188-95322024000200001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S0188-95322024000200001&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[Abstract Emotions represent affective states that induce alterations in behavior and interactions within one's environment. An avenue for discerning human emotions lies in the realm of speech analysis. Empirical evidence indicates that 1.6 million Indonesian teenagers grapple with mental anxiety disorders, characterized by sensations of fear or ambiguous vigilance. This work endeavors to devise a tool for discerning an individual's emotional state through voice processing, focusing particularly on fear emotions stratified into three levels of intensity: low, medium, and high. The proposed system employs Gammatone-Frequency Cepstral Coefficients (GFCC) for feature extraction, leveraging the efficacy of its gamma filter in reducing noise. Furthermore, a Random Forest (RF) Classifier is integrated to facilitate the recognition of fear's emotional intensity in speech signals. The system is deployed on a Raspberry Pi 4B and establishes a Bluetooth connection using the RFCOMM communication protocol to an Android application, presenting the classification results. The outcomes reveal that the Signal-to-Noise Reduction achieved through GFCC extraction surpasses that of Mel-Frequency Cepstral Coefficients (MFCC). In terms of accuracy, the implemented recognition system for fear emotion levels, employing GFCC extraction and Random Forest Classifier, attains a commendable accuracy of 73.33 %.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[Resumen Las emociones representan estados afectivos que inducen alteraciones en el comportamiento e interacciones dentro del entorno de un individuo. Un enfoque para discernir las emociones humanas se encuentra en el análisis del habla. La evidencia empírica indica que 1.6 millones de adolescentes indonesios enfrentan trastornos de ansiedad mental, caracterizados por sensaciones de miedo o vigilancia ambigua. Esta investigación se propone diseñar una herramienta para discernir el estado emocional de una persona mediante el procesamiento de la voz, centrándose especialmente en las emociones de miedo estratificadas en tres niveles de intensidad: bajo, medio y alto. La metodología propuesta emplea los Coeficientes Cepstrales de Frecuencia Gammatone (GFCC) para la extracción de características, aprovechando la eficacia de su filtro gamma para combatir el ruido. Además, se incorpora un Clasificador Random Forest (RF) para facilitar el reconocimiento de la intensidad emocional del miedo en las señales de voz. El sistema se implementa en una Raspberry Pi 4B y establece una conexión Bluetooth utilizando el protocolo de comunicación RFCOMM con una aplicación Android, presentando los resultados de la clasificación. Los resultados revelan que la Reducción de Señal a Ruido lograda mediante la extracción de GFCC supera a la de los Coeficientes Cepstrales de Frecuencia Mel (MFCC). En términos de precisión, el sistema de reconocimiento implementado para los niveles de emoción de miedo, utilizando la extracción de GFCC y el Clasificador Random Forest, alcanza una precisión destacada del 73.33 %]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[fear emotion]]></kwd>
<kwd lng="en"><![CDATA[gammatone-frequency cepstral coefficients]]></kwd>
<kwd lng="en"><![CDATA[Mel-frequency cepstral coefficients]]></kwd>
<kwd lng="en"><![CDATA[signal-to-noise reduction]]></kwd>
<kwd lng="en"><![CDATA[speech sound]]></kwd>
<kwd lng="es"><![CDATA[emoción de miedo]]></kwd>
<kwd lng="es"><![CDATA[coeficientes cepstrales de frecuencia gammatone]]></kwd>
<kwd lng="es"><![CDATA[coeficientes cepstrales de frecuencia Mel]]></kwd>
<kwd lng="es"><![CDATA[reducción de señal a ruido]]></kwd>
<kwd lng="es"><![CDATA[sonido del habla]]></kwd>
</kwd-group>
</article-meta>
</front><back>
<ref-list>
<ref id="B1">
<label>[1]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gupta]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Bharti]]></surname>
<given-names><![CDATA[S. S.]]></given-names>
</name>
<name>
<surname><![CDATA[Agarwal]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Gender-based speaker recognition from speech signals using GMM model]]></article-title>
<source><![CDATA[Mod. Phys. Lett. B]]></source>
<year>2019</year>
<volume>33</volume>
<numero>35</numero>
<issue>35</issue>
</nlm-citation>
</ref>
<ref id="B2">
<label>[2]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Ota&#353;evi&#263;]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Ota&#353;evi&#263;]]></surname>
<given-names><![CDATA[B.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Voice-based identification and contribution to the efficiency of criminal proceedings]]></article-title>
<source><![CDATA[J. Crim. Crim. Law]]></source>
<year>2021</year>
<volume>59</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>61-72</page-range></nlm-citation>
</ref>
<ref id="B3">
<label>[3]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kotz]]></surname>
<given-names><![CDATA[S. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Dengler]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Wittfoth]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Valence-specific conflict moderation in the dorso-medial PFC and the caudate head in emotional speech]]></article-title>
<source><![CDATA[Soc. Cogn. Affect Neurosci.]]></source>
<year>2015</year>
<volume>10</volume>
<numero>2</numero>
<issue>2</issue>
</nlm-citation>
</ref>
<ref id="B4">
<label>[4]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gomez]]></surname>
<given-names><![CDATA[S. J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Self-Management Skills of Management Graduates]]></article-title>
<source><![CDATA[Int. J. Res. Manag. Bus. Stud.]]></source>
<year>2017</year>
<volume>4</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>40-4</page-range></nlm-citation>
</ref>
<ref id="B5">
<label>[5]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Saddiqui]]></surname>
<given-names><![CDATA[S. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Jawad]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Naz]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Niazi]]></surname>
<given-names><![CDATA[G. S. Khan]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Emotional intelligence and managerial effectiveness]]></article-title>
<source><![CDATA[RIC]]></source>
<year>2018</year>
<volume>4</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>99-130</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>[6]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Erskine]]></surname>
<given-names><![CDATA[H. E.]]></given-names>
</name>
<name>
<surname><![CDATA[Blondell]]></surname>
<given-names><![CDATA[S. J.]]></given-names>
</name>
<name>
<surname><![CDATA[Enright]]></surname>
<given-names><![CDATA[M. E.]]></given-names>
</name>
<name>
<surname><![CDATA[Shadid]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Measuring the Prevalence of Mental Disorders in Adolescents in Kenya, Indonesia, and Vietnam: Study Protocol for the National Adolescent Mental Health Surveys]]></article-title>
<source><![CDATA[J. Adolesc. Health]]></source>
<year>2023</year>
<volume>72</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>S71-8</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>[7]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cherry]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[What Are Emotions and the Types of Emotional Responses?]]></article-title>
<source><![CDATA[Verywell Health]]></source>
<year></year>
</nlm-citation>
</ref>
<ref id="B8">
<label>[8]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sharma]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Mamata]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Psychological Impacts, Hand Hygiene Practices and Its Correlates in View of Covid-19 among Health Care Professionals in Northern States of India]]></article-title>
<source><![CDATA[Indian J. Forensic Med. Toxicol.]]></source>
<year>2021</year>
<volume>15</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>3691-8</page-range></nlm-citation>
</ref>
<ref id="B9">
<label>[9]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mahar]]></surname>
<given-names><![CDATA[S. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Mahar]]></surname>
<given-names><![CDATA[M. H.]]></given-names>
</name>
<name>
<surname><![CDATA[Mahar]]></surname>
<given-names><![CDATA[J. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Masud]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Ahmad]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Jhanhi]]></surname>
<given-names><![CDATA[N. Z.]]></given-names>
</name>
<name>
<surname><![CDATA[Razzaq]]></surname>
<given-names><![CDATA[M. A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Superposition of functional contours based prosodic feature extraction for speech processing]]></article-title>
<source><![CDATA[Intell. Autom. Soft Comput.]]></source>
<year>2021</year>
<volume>29</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>183-97</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>[10]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sondhi]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Khan]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Vijay]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Salhan]]></surname>
<given-names><![CDATA[A. K.]]></given-names>
</name>
<name>
<surname><![CDATA[Chouhan]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Acoustic analysis of speech under stress]]></article-title>
<source><![CDATA[Int. J. Bioinform. Res. Appl.]]></source>
<year>2015</year>
<volume>11</volume>
<numero>5</numero>
<issue>5</issue>
<page-range>417-32</page-range></nlm-citation>
</ref>
<ref id="B11">
<label>[11]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Likitha]]></surname>
<given-names><![CDATA[M. S.]]></given-names>
</name>
<name>
<surname><![CDATA[Gupta]]></surname>
<given-names><![CDATA[S. R. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Hasitha]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[Raju]]></surname>
<given-names><![CDATA[A. U.]]></given-names>
</name>
</person-group>
<source><![CDATA[Speech based human emotion recognition using MFCC]]></source>
<year>2017</year>
<conf-name><![CDATA[International Conference on Wireless Communications, Signal Processing and Networking]]></conf-name>
<conf-loc>Chennai, India </conf-loc>
<page-range>2257-60</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>[12]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jeevan]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Dhingra]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Hanmandlu]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Panigrahi]]></surname>
<given-names><![CDATA[B. K.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Robust speaker verification using GFCC based i-vectors]]></article-title>
<source><![CDATA[Proceedings of the International Conference on Signal, Networks, Computing, and Systems. Lecture Notes in Electrical Engineering]]></source>
<year>2017</year>
<volume>395</volume>
<page-range>85-91</page-range><publisher-loc><![CDATA[New Delhi, India ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B13">
<label>[13]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wang]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[The application of Gammatone frequency cepstral coefficients for forensic voice comparison under noisy conditions]]></article-title>
<source><![CDATA[Aust. J. Forensic Sci.]]></source>
<year>2020</year>
<volume>52</volume>
<numero>5</numero>
<issue>5</issue>
<page-range>553-68</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>[14]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bharti]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Kukana]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<source><![CDATA[A Hybrid Machine Learning Model for Emotion Recognition from Speech Signals]]></source>
<year>2020</year>
<conf-name><![CDATA[International Conference on Smart Electronics and Communication]]></conf-name>
<conf-loc>Trichy, India </conf-loc>
<page-range>491-6</page-range></nlm-citation>
</ref>
<ref id="B15">
<label>[15]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Patni]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Jagtap]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Bhoyar]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Gupta]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<source><![CDATA[Speech Emotion Recognition using MFCC, GFCC, Chromagram and RMSE features]]></source>
<year>2021</year>
<conf-name><![CDATA[8th International Conference on Signal Processing and Integrated Networks]]></conf-name>
<conf-loc>Noida, India </conf-loc>
<page-range>892-7</page-range></nlm-citation>
</ref>
<ref id="B16">
<label>[16]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Choudhary]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Sadhya]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Patel]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
</person-group>
<source><![CDATA[Automatic Speaker Verification using Gammatone Frequency Cepstral Coefficients]]></source>
<year>2021</year>
<conf-name><![CDATA[8th International Conference on Signal Processing and Integrated Networks]]></conf-name>
<conf-loc>Noida, India </conf-loc>
<page-range>424-8</page-range></nlm-citation>
</ref>
<ref id="B17">
<label>[17]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zheng]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Li]]></surname>
<given-names><![CDATA[Q.]]></given-names>
</name>
<name>
<surname><![CDATA[Ban]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Liu]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<source><![CDATA[Speech emotion recognition based on convolution neural network combined with random forest]]></source>
<year>2018</year>
<conf-name><![CDATA[Chinese Control and Decision Conference]]></conf-name>
<conf-loc>Shenyang, China </conf-loc>
<page-range>4143-7</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>[18]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hamsa]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Shahin]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
<name>
<surname><![CDATA[Iraqi]]></surname>
<given-names><![CDATA[Y.]]></given-names>
</name>
<name>
<surname><![CDATA[Werghi]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Emotion Recognition from Speech Using Wavelet Packet Transform Cochlear Filter Bank and Random Forest Classifier]]></article-title>
<source><![CDATA[IEEE Access]]></source>
<year>2020</year>
<volume>8</volume>
<page-range>96994-7006</page-range></nlm-citation>
</ref>
<ref id="B19">
<label>[19]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cuncic]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Amygdala Hijack and the Fight or Flight Response]]></article-title>
<source><![CDATA[Very Well Mind]]></source>
<year></year>
</nlm-citation>
</ref>
<ref id="B20">
<label>[20]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Foa]]></surname>
<given-names><![CDATA[E. B.]]></given-names>
</name>
<name>
<surname><![CDATA[Kozak]]></surname>
<given-names><![CDATA[M. J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Emotional Processing of Fear. Exposure to Corrective Information]]></article-title>
<source><![CDATA[Psychol. Bull.]]></source>
<year>1986</year>
<volume>99</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>20-35</page-range></nlm-citation>
</ref>
<ref id="B21">
<label>[21]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Qaisar]]></surname>
<given-names><![CDATA[S. M.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Isolated speech recognition and its transformation in visual signs]]></article-title>
<source><![CDATA[J. Electr. Eng. Technol.]]></source>
<year>2019</year>
<volume>14</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>955-64</page-range></nlm-citation>
</ref>
<ref id="B22">
<label>[22]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lokesh]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Devi]]></surname>
<given-names><![CDATA[M. R.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Speech recognition system using enhanced mel frequency cepstral coefficient with windowing and framing method]]></article-title>
<source><![CDATA[Cluster Comput.]]></source>
<year>2019</year>
<volume>22</volume>
<page-range>11669-79</page-range></nlm-citation>
</ref>
<ref id="B23">
<label>[23]</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schmidt]]></surname>
<given-names><![CDATA[J. D.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Simple Computations Using Fourier Transforms]]></article-title>
<source><![CDATA[Numerical Simulation of Optical Wave Propagation with Examples in MATLAB]]></source>
<year>2010</year>
<publisher-loc><![CDATA[Bellingham, WA, USA ]]></publisher-loc>
<publisher-name><![CDATA[SPIE Press]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B24">
<label>[24]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Krobba]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Debyeche]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Selouani]]></surname>
<given-names><![CDATA[S. A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Mixture linear prediction Gammatone Cepstral features for robust speaker verification under transmission channel noise]]></article-title>
<source><![CDATA[Multimed. Tools Appl.]]></source>
<year>2020</year>
<volume>79</volume>
<numero>25-26</numero>
<issue>25-26</issue>
<page-range>18679-93</page-range></nlm-citation>
</ref>
<ref id="B25">
<label>[25]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Revathi]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Sasikaladevi]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
<name>
<surname><![CDATA[Nagakrishnan]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Jeyalakshmi]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Robust emotion recognition from speech: Gamma tone features and models]]></article-title>
<source><![CDATA[Int. J. Speech Technol.]]></source>
<year>2018</year>
<volume>21</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>723-39</page-range></nlm-citation>
</ref>
<ref id="B26">
<label>[26]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kumaran]]></surname>
<given-names><![CDATA[U.]]></given-names>
</name>
<name>
<surname><![CDATA[Rammohan]]></surname>
<given-names><![CDATA[S. Radha]]></given-names>
</name>
<name>
<surname><![CDATA[Nagarajan]]></surname>
<given-names><![CDATA[S. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Prathik]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Fusion of mel and gammatone frequency cepstral coefficients for speech emotion recognition using deep C-RNN]]></article-title>
<source><![CDATA[Int. J. Speech Technol.]]></source>
<year>2021</year>
<volume>24</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>303-14</page-range></nlm-citation>
</ref>
<ref id="B27">
<label>[27]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Rhee]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Kang]]></surname>
<given-names><![CDATA[M. G.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Discrete cosine transform based regularized high-resolution image reconstruction algorithm]]></article-title>
<source><![CDATA[Opt. Eng.]]></source>
<year>1999</year>
<volume>38</volume>
<numero>8</numero>
<issue>8</issue>
<page-range>1348-56</page-range></nlm-citation>
</ref>
<ref id="B28">
<label>[28]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Subudhi]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Dash]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Sabut]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Automated segmentation and classification of brain stroke using expectation-maximization and random forest classifier]]></article-title>
<source><![CDATA[Biocybern. Biomed. Eng.]]></source>
<year>2020</year>
<volume>40</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>277-89</page-range></nlm-citation>
</ref>
<ref id="B29">
<label>[29]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Phan]]></surname>
<given-names><![CDATA[T. N.]]></given-names>
</name>
<name>
<surname><![CDATA[Kuch]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Lehnert]]></surname>
<given-names><![CDATA[L. W.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Land cover classification using google earth engine and random forest classifier-the role of image composition]]></article-title>
<source><![CDATA[Remote Sens.]]></source>
<year>2020</year>
<volume>12</volume>
<numero>15</numero>
<issue>15</issue>
</nlm-citation>
</ref>
<ref id="B30">
<label>[30]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Adiono]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Anindya]]></surname>
<given-names><![CDATA[S. F.]]></given-names>
</name>
<name>
<surname><![CDATA[Fuada]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Afifah]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[Purwanda]]></surname>
<given-names><![CDATA[I. G.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Efficient Android Software Development Using MIT App Inventor 2 for Bluetooth-Based Smart Home]]></article-title>
<source><![CDATA[Wireless Pers. Commun.]]></source>
<year>2019</year>
<volume>105</volume>
<page-range>233-56</page-range></nlm-citation>
</ref>
<ref id="B31">
<label>[31]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cao]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Cooper]]></surname>
<given-names><![CDATA[D. G.]]></given-names>
</name>
<name>
<surname><![CDATA[Keutmann]]></surname>
<given-names><![CDATA[M. K.]]></given-names>
</name>
<name>
<surname><![CDATA[Gur]]></surname>
<given-names><![CDATA[R. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Nenkova]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Verma]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset]]></article-title>
<source><![CDATA[IEEE Trans. Affect. Comput.]]></source>
<year>2014</year>
<volume>5</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>377-90</page-range></nlm-citation>
</ref>
<ref id="B32">
<label>[32]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mishra]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Patil]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Karkhanis]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
<name>
<surname><![CDATA[Gaikar]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Wani]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<source><![CDATA[Real time emotion detection from speech using Raspberry Pi 3]]></source>
<year>2017</year>
<conf-name><![CDATA[International Conference on Wireless Communications, Signal Processing and Networking]]></conf-name>
<conf-loc>Chennai, India </conf-loc>
<page-range>2300-3</page-range></nlm-citation>
</ref>
<ref id="B33">
<label>[33]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Alshamsi]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Kepuska]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Alshamsi]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Meng]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<source><![CDATA[Automated Speech Emotion Recognition on Smart Phones]]></source>
<year>2018</year>
<conf-name><![CDATA[9th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference]]></conf-name>
<conf-loc>New York, NY, USA </conf-loc>
<page-range>44-50</page-range></nlm-citation>
</ref>
<ref id="B34">
<label>[34]</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chebbi]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Jebara]]></surname>
<given-names><![CDATA[S. Ben]]></given-names>
</name>
</person-group>
<source><![CDATA[On the Selection of Relevant Features for Fear Emotion Detection from Speech]]></source>
<year>2018</year>
<conf-name><![CDATA[9th International Symposium on Signal, Image, Video and Communications]]></conf-name>
<conf-loc>Rabat, Morocco </conf-loc>
<page-range>82-6</page-range></nlm-citation>
</ref>
<ref id="B35">
<label>[35]</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Clavel]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
<name>
<surname><![CDATA[Vasilescu]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
<name>
<surname><![CDATA[Devillers]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Richard]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Ehrette]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Fear-type emotion recognition for future audio-based surveillance systems]]></article-title>
<source><![CDATA[Speech Commun.]]></source>
<year>2008</year>
<volume>50</volume>
<numero>6</numero>
<issue>6</issue>
<page-range>487-503</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
