<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>1405-5546</journal-id>
<journal-title><![CDATA[Computación y Sistemas]]></journal-title>
<abbrev-journal-title><![CDATA[Comp. y Sist.]]></abbrev-journal-title>
<issn>1405-5546</issn>
<publisher>
<publisher-name><![CDATA[Instituto Politécnico Nacional, Centro de Investigación en Computación]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S1405-55462013000400011</article-id>
<title-group>
<article-title xml:lang="es"><![CDATA[Tratamiento del desbalance en problemas con múltiples clases con ECOC]]></article-title>
<article-title xml:lang="en"><![CDATA[Handling the Multi-Class Imbalance Problem using ECOC]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Valdovinos Rosas]]></surname>
<given-names><![CDATA[Rosa María]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Abad Sánchez]]></surname>
<given-names><![CDATA[Rosalinda]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Alejo Eleuterio]]></surname>
<given-names><![CDATA[Roberto]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Herrera Arteaga]]></surname>
<given-names><![CDATA[Edgar]]></given-names>
</name>
<xref ref-type="aff" rid="A03"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Trueba Espinosa]]></surname>
<given-names><![CDATA[Adrián]]></given-names>
</name>
<xref ref-type="aff" rid="A04"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Universidad Autónoma del Estado de México, Facultad de Ingeniería]]></institution>
<addr-line><![CDATA[Toluca ]]></addr-line>
<country>México</country>
</aff>
<aff id="A02">
<institution><![CDATA[Tecnológico de Estudios Superiores de Jocotitlán]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>México</country>
</aff>
<aff id="A03">
<institution><![CDATA[Instituto Nacional de Investigación Nuclear]]></institution>
<addr-line><![CDATA[Ocoyoacac ]]></addr-line>
<country>México</country>
</aff>
<aff id="A04">
<institution><![CDATA[Universidad Autónoma del Estado de México]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>México</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2013</year>
</pub-date>
<volume>17</volume>
<numero>4</numero>
<fpage>583</fpage>
<lpage>592</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S1405-55462013000400011&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S1405-55462013000400011&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S1405-55462013000400011&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="es"><p><![CDATA[El problema del desbalance de clases puede producir un deterioro importante en la efectividad del clasificador, en particular con los patrones de las clases menos representadas. El desbalance en el conjunto de entrenamiento (CE) significa que una clase es representada por una gran cantidad de patrones mientras que otra es representada por muy pocos. Los estudios existentes se encuentran orientados principalmente a tratar problemas de dos clases, no obstante, un importante número de problemas reales se encuentran representados por múltiples clases, donde resulta más difícil su discriminación para el clasificador. El éxito de la Mezcla de Expertos (ME) se basa en el criterio de "divide y vencerás". En su funcionamiento general, el problema es dividido en fragmentos más pequeños que serán estudiados por separado. De este modo, el modelo general es poco influenciado por las dificultades individuales de sus componentes. La idea principal del estudio aquí mostrado es construir una Mezcla de Expertos cuyos miembros serán entrenados en una parte del problema general y, de este modo, mejorar el rendimiento del clasificador en el contexto de múltiples clases. Para este fin, se hace uso de los métodos conocidos como Error-correcting output codes (ECOC), que permiten realizar una codificación en parejas de clases del problema de estudio. Resultados experimentales sobre conjuntos de datos reales muestran la viabilidad de la estrategia aquí propuesta.]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[An imbalanced training sample means that one class is represented by a large number of examples while another is represented by only a few. This problem may produce an important deterioration of classifier performance, in particular for patterns belonging to the less represented classes. Most studies in this area are oriented mainly to solving two-class problems. However, many real problems involve multiple classes, which are more difficult for a classifier to discriminate. The success of the Mixture of Experts (ME) strategy is based on the "divide and conquer" criterion: the global problem is divided into smaller fragments that are studied separately, so the overall model is only slightly affected by the individual difficulties of its members. In this paper we propose a strategy for handling the class imbalance problem in data sets with multiple classes. To this end, we build a mixture of experts whose members are each trained on a part of the general problem, thereby improving the behavior of the whole system. To divide the problem we employ the so-called Error-Correcting Output Codes (ECOC) methods, which encode the classes in pairs that are then used to train the mixture of experts. Experiments with real data sets demonstrate the viability of the proposed strategy.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[Desbalance de clases]]></kwd>
<kwd lng="es"><![CDATA[mezcla de expertos]]></kwd>
<kwd lng="es"><![CDATA[fusión]]></kwd>
<kwd lng="es"><![CDATA[múltiples clases]]></kwd>
<kwd lng="es"><![CDATA[error correcting output codes (ECOC)]]></kwd>
<kwd lng="en"><![CDATA[Class imbalance]]></kwd>
<kwd lng="en"><![CDATA[fusion]]></kwd>
<kwd lng="en"><![CDATA[mixture of experts]]></kwd>
<kwd lng="en"><![CDATA[error correcting output codes (ECOC)]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[  	    <p align="justify"><font face="verdana" size="4">Art&iacute;culos regulares</font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="4"><b>Tratamiento del desbalance en problemas con m&uacute;ltiples clases</b> <b>con ECOC</b></font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="3"><b>Handling the Multi&#45;Class Imbalance Problem using ECOC</b></font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="2"><b>Rosa Mar&iacute;a Valdovinos Rosas<sup>1</sup>, Rosalinda Abad S&aacute;nchez, Roberto Alejo Eleuterio<sup>2</sup>, Edgar Herrera Arteaga <sup>1,3</sup>, Adri&aacute;n Trueba Espinosa<sup>4</sup></b></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><sup><i>1</i></sup> <i>Universidad Aut&oacute;noma del Estado de M&eacute;xico, Facultad de Ingenier&iacute;a, Ciudad Universitaria, Toluca, M&eacute;xico.</i> <a href="mailto:li_rmvr@hotmail.com">li_rmvr@hotmail.com</a></font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><sup><i>2</i></sup> <i>Tecnol&oacute;gico de Estudios Superiores de Jocotitl&aacute;n, M&eacute;xico</i>. <a href="mailto:ralejoll@hotmail.com">ralejoll@hotmail.com</a></font></p>  	    <p align="justify"><font face="verdana" size="2"><sup><i>3</i></sup> <i>Instituto Nacional de Investigaci&oacute;n Nuclear ININ, La Marquesa, Ocoyoacac,</i> <i>M&eacute;xico.</i> <a href="mailto:edgar.herrera@inin.gob.mx">edgar.herrera@inin.gob.mx</a></font></p>  	    <p align="justify"><font face="verdana" size="2"><sup><i>4</i></sup> <i>Centro Universitario UAEM Texcoco, M&eacute;xico.</i></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2">Article received on 05/11/2012    <br> 	Accepted on 21/06/2013</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Resumen</b></font></p>  	    <p align="justify"><font face="verdana" size="2">El problema del desbalance de clases puede producir un deterioro importante en la efectividad del clasificador, en particular con los patrones de las clases menos representadas. El desbalance en el conjunto de entrenamiento (CE) significa que una clase es representada por una gran cantidad de patrones mientras que otra es representada por muy pocos. Los estudios existentes se encuentran orientados principalmente a tratar problemas de dos clases, no obstante, un importante n&uacute;mero de problemas reales se encuentran representados por m&uacute;ltiples clases, donde resulta m&aacute;s dif&iacute;cil su discriminaci&oacute;n para el clasificador. El &eacute;xito de la Mezcla de Expertos (ME) se basa en el criterio de "divide y vencer&aacute;s". En su funcionamiento general, el problema es dividido en fragmentos m&aacute;s peque&ntilde;os que ser&aacute;n estudiados por separado. 
De este modo, el modelo general es poco influenciado por las dificultades individuales de sus componentes. La idea principal del estudio aqu&iacute; mostrado es construir una Mezcla de Expertos cuyos miembros ser&aacute;n entrenados en una parte del problema general y, de este modo, mejorar el rendimiento del clasificador en el contexto de m&uacute;ltiples clases. Para este fin, se hace uso de los m&eacute;todos conocidos como <i>Error&#45;correcting output codes</i> (ECOC), que permiten realizar una codificaci&oacute;n en parejas de clases del problema de estudio. Resultados experimentales sobre conjuntos de datos reales muestran la viabilidad de la estrategia aqu&iacute; propuesta.</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Palabras clave:</b> Desbalance de clases, mezcla de expertos, fusi&oacute;n, m&uacute;ltiples clases, error correcting output codes (ECOC).</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Abstract</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Imbalanced training sample means that one class is represented by a large number of examples while the other is represented by only a few. This problem may produce an important deterioration of the classifier performance, in particular with patterns belonging to the less represented classes. The majority of the studies in this area are oriented, mainly, to resolve problems with two classes. However, many real problems are represented by multiple classes, where it is more difficult to discriminate between them. The success of the Mixture of Experts (ME) strategy is based on the criterion of "divide and win". The general process divides the global problem into smaller fragments which will be studied separately. In this way, the general model has few influences of the individual difficulties (of their members). In this paper we propose a strategy for handling the class imbalance problem for data sets with multiple classes. For that, we integrate a mixture of experts whose members will be trained as a part of the general problem and, in this way, will improve the behavior of the whole system. For dividing the problem we employ the called <i>Error&#45;correcting output codes</i> (ECOC) methods, when the classes are codified in pairs, which are considered for training the mixture of experts. 
Experiments with real datasets demonstrate the viability of the proposed strategy.</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Keywords:</b> Class imbalance, fusion, mixture of experts, error correcting output codes (ECOC).</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><a href="/pdf/cys/v17n4/v17n4a11.pdf" target="_blank">DESCARGAR ART&Iacute;CULO EN FORMATO PDF</a></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Agradecimientos</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Este trabajo fue realizado gracias al apoyo recibido de los proyectos: 3072/2011 de la UAEM, PROMEP/103.5/12/4783 de las SEP, SDMAIA&#45;010 del TESjo, UR&#45;001 del ININ.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><b>Referencias</b></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>1. Alejo, R., Sotoca, J.M., &amp; Casa&ntilde;, G.A. (2008).</b> An empirical study for the multi&#45;class imbalance problem with neural networks. <i>Pattern Recognition, Image Analysis and Applications. Lecture Notes in Computer Science,</i> 5197, 479&#45;486.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062715&pid=S1405-5546201300040001100001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>2. Barandela, R., S&aacute;nchez, J.S., Garc&iacute;a, V., &amp; Rangel, E. (2001).</b> Fusion of techniques for handling the imbalanced training sample problem. <i>6<sup>th</sup> Ibero&#45;American Symposium on Pattern Recognition, Florian&oacute;polis, Brazil,</i> 34&#45;40.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062717&pid=S1405-5546201300040001100002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>3. Batista, G., Prati, R.C., &amp; Monard, M.C. (2004).</b> A study of the behavior of several methods for balancing machine learning training data. <i>ACM SIGKDD Explorations Newsletter,</i> 6(1), 20&#45;29.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062719&pid=S1405-5546201300040001100003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>4. Breiman, L. (1998).</b> Arcing classifiers. <i>The Annals</i> <i>of Statistics.</i> 26(3), 801 &#45;849.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062721&pid=S1405-5546201300040001100004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>5. Chawla, N.V., Bowyer, K.W., Hall, L.O., &amp; Kegelmeyer. W.P. (2000).</b> SMOTE: synthetic minority over&#45;sampling technique. <i>Journal of Artificial Intelligence Research,</i> 16(1), 321&#45;357.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062723&pid=S1405-5546201300040001100005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>6. Dietterich, T.G. &amp; Bakiri, G. (1994).</b> Solving multiclass learning problems via error&#45;correcting output codes. <i>Journal of Artificial Intelligence</i> <i>Research,</i> 2(1), 263&#45;286.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062725&pid=S1405-5546201300040001100006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>7. Dietterich, T.G. (1997).</b> Machine learning research: four current directions. <i>AI Magazine,</i> 18(4), 97&#45;136.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062727&pid=S1405-5546201300040001100007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>8. Kong, E.B. &amp; Dietterich, T.G. (1995).</b> Error&#45;Correcting Output Coding Corrects Bias and Variance. <i>12<sup>th</sup> International Conference on Machine Learning.</i> California, USA, 313&#45;321.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062729&pid=S1405-5546201300040001100008&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>9. Eavis, T. &amp; Japkowicz, N. (2000).</b> A recognition&#45;based alternative to discrimination&#45;base multilayer perceptrons. <i>Advances in Artificial Intelligence, Lecture Notes in Computer Science, 1822,</i> 280&#45;292.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062731&pid=S1405-5546201300040001100009&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>      <!-- ref --><p align="justify"><font face="verdana" size="2"><b>10. Kuncheva, L.I. (2000).</b> Clustering&#45;and&#45;selection model for classifier combination. <i>4<sup>th</sup> International Conference on Knowledge&#45;Based Intelligent Engineering Systems and Allied Technologies (KES'2000).</i> Brighton, UK, 1, 185&#45;188.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062733&pid=S1405-5546201300040001100010&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>11. Kuncheva, L.I. (2001).</b> Using measures of similarity and inclusion of multiple classifier fusion by decision templates. <i>Fuzzy Sets and Systems,</i> 122(3), 401 &#45;407.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062735&pid=S1405-5546201300040001100011&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>12. Kuncheva, L.I. &amp; Whitaker, C.J. (2003).</b> Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. <i>Machine Learning,</i> 51(2), 181 &#45;207.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062737&pid=S1405-5546201300040001100012&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>13. Kuncheva, L.I. (2005).</b> Using diversity measures for generating error&#45;correcting output codes in classifier ensemble. <i>Pattern Recognition Letters,</i> 26, 83&#45;90.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062739&pid=S1405-5546201300040001100013&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>14. Congalton, R.G. &amp; Green, K. (1999).</b> <i>Assessing the Accuracy of Remotely Sensed Data: Principles and Practices.</i> Boca Raton: Lewis Publications.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062741&pid=S1405-5546201300040001100014&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>15. Daqi, G., Wei, W., &amp; Jianliang, G. (2007).</b> Class&#45;modular multi&#45;layer perceptions, task decomposition and virtually balanced training subsets. <i>International Joint Conference on Neural Networks,</i> Orlando, Florida, USA, 2153&#45;2158.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062743&pid=S1405-5546201300040001100015&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>16. Soda, P. &amp; lannello, G. (2010).</b> Decomposition Methods and Learning approaches for Imbalanced Dataset: An Experimental Integration. <i>20<sup>th</sup> International Conference on Pattern Recognition,</i> Istanbul, Turkey, 3117&#45;3120.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062745&pid=S1405-5546201300040001100016&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>17. Demsar, J. (2006).</b> Statistical comparisons of classifiers over multiple data sets. <i>Journal of Machine Learning Research,</i> 7(2006), 1&#45;30.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062747&pid=S1405-5546201300040001100017&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>18. Garcia, S. &amp; Herrera, F. (2008).</b> An extension on statistical comparisons of classifiers over multiple data sets for all pairwise comparisons. <i>Journal of Machine Learning Research,</i> 9(12), 2677&#45;2694.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062749&pid=S1405-5546201300040001100018&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2"><b>19. Dasarathy, B.V. (1991).</b> <i>Nearest Neighbor (NN) Norms: nn Pattern Clasification Techniques.</i> Los Alamitos, CA: IEEE Computer Society Press.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=2062751&pid=S1405-5546201300040001100019&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>      ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Alejo]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Sotoca]]></surname>
<given-names><![CDATA[J.M.]]></given-names>
</name>
<name>
<surname><![CDATA[Casañ]]></surname>
<given-names><![CDATA[G.A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An empirical study for the multi-class imbalance problem with neural networks]]></article-title>
<source><![CDATA[Pattern Recognition, Image Analysis and Applications. Lecture Notes in Computer Science]]></source>
<year>2008</year>
<volume>5197</volume>
<page-range>479-486</page-range></nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Barandela]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Sánchez]]></surname>
<given-names><![CDATA[J.S.]]></given-names>
</name>
<name>
<surname><![CDATA[García]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
<name>
<surname><![CDATA[Rangel]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
</person-group>
<source><![CDATA[Fusion of techniques for handling the imbalanced training sample problem. 6th Ibero-American Symposium on Pattern Recognition]]></source>
<year>2001</year>
<page-range>34-40</page-range><publisher-loc><![CDATA[Florianópolis ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Batista]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Prati]]></surname>
<given-names><![CDATA[R.C.]]></given-names>
</name>
<name>
<surname><![CDATA[Monard]]></surname>
<given-names><![CDATA[M.C.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A study of the behavior of several methods for balancing machine learning training data]]></article-title>
<source><![CDATA[ACM SIGKDD Explorations Newsletter]]></source>
<year>2004</year>
<volume>6</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>20-29</page-range></nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Breiman]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Arcing classifiers]]></article-title>
<source><![CDATA[The Annals of Statistics]]></source>
<year>1998</year>
<volume>26</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>801-849</page-range></nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chawla]]></surname>
<given-names><![CDATA[N.V.]]></given-names>
</name>
<name>
<surname><![CDATA[Bowyer]]></surname>
<given-names><![CDATA[K.W.]]></given-names>
</name>
<name>
<surname><![CDATA[Hall]]></surname>
<given-names><![CDATA[L.O.]]></given-names>
</name>
<name>
<surname><![CDATA[Kegelmeyer]]></surname>
<given-names><![CDATA[W.P.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[SMOTE: synthetic minority over-sampling technique]]></article-title>
<source><![CDATA[Journal of Artificial Intelligence Research]]></source>
<year>2002</year>
<volume>16</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>321-357</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dietterich]]></surname>
<given-names><![CDATA[T.G.]]></given-names>
</name>
<name>
<surname><![CDATA[Bakiri]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Solving multiclass learning problems via error-correcting output codes]]></article-title>
<source><![CDATA[Journal of Artificial Intelligence Research]]></source>
<year>1994</year>
<volume>2</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>263-286</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dietterich]]></surname>
<given-names><![CDATA[T.G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Machine learning research: four current directions]]></article-title>
<source><![CDATA[AI Magazine]]></source>
<year>1997</year>
<volume>18</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>97-136</page-range></nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kong]]></surname>
<given-names><![CDATA[E.B.]]></given-names>
</name>
<name>
<surname><![CDATA[Dietterich]]></surname>
<given-names><![CDATA[T.G.]]></given-names>
</name>
</person-group>
<source><![CDATA[Error-Correcting Output Coding Corrects Bias and Variance. 12th International Conference on Machine Learning]]></source>
<year>1995</year>
<page-range>313-321</page-range><publisher-loc><![CDATA[California]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Eavis]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Japkowicz]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
</person-group>
<source><![CDATA[A recognition-based alternative to discrimination-based multilayer perceptrons. Advances in Artificial Intelligence]]></source>
<year>2000</year>
<volume>1822</volume>
<page-range>280-292</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kuncheva]]></surname>
<given-names><![CDATA[L.I.]]></given-names>
</name>
</person-group>
<source><![CDATA[Clustering-and-selection model for classifier combination. 4th International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies (KES'2000)]]></source>
<year>2000</year>
<volume>1</volume>
<page-range>185-188</page-range><publisher-loc><![CDATA[Brighton ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kuncheva]]></surname>
<given-names><![CDATA[L.I.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Using measures of similarity and inclusion for multiple classifier fusion by decision templates]]></article-title>
<source><![CDATA[Fuzzy Sets and Systems]]></source>
<year>2001</year>
<volume>122</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>401-407</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kuncheva]]></surname>
<given-names><![CDATA[L.I.]]></given-names>
</name>
<name>
<surname><![CDATA[Whitaker]]></surname>
<given-names><![CDATA[C.J.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy]]></article-title>
<source><![CDATA[Machine Learning]]></source>
<year>2003</year>
<volume>51</volume>
<issue>2</issue>
<page-range>181-207</page-range></nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kuncheva]]></surname>
<given-names><![CDATA[L.I.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Using diversity measures for generating error-correcting output codes in classifier ensembles]]></article-title>
<source><![CDATA[Pattern Recognition Letters]]></source>
<year>2005</year>
<volume>26</volume>
<page-range>83-90</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Congalton]]></surname>
<given-names><![CDATA[R.G.]]></given-names>
</name>
<name>
<surname><![CDATA[Green]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<source><![CDATA[Assessing the Accuracy of Remotely Sensed Data: Principles and Practices]]></source>
<year>1999</year>
<publisher-loc><![CDATA[Boca Raton ]]></publisher-loc>
<publisher-name><![CDATA[Lewis Publications]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Daqi]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Wei]]></surname>
<given-names><![CDATA[W.]]></given-names>
</name>
<name>
<surname><![CDATA[Jianliang]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<source><![CDATA[Class-modular multi-layer perceptrons, task decomposition and virtually balanced training subsets. International Joint Conference on Neural Networks]]></source>
<year>2007</year>
<page-range>2153-2158</page-range><publisher-loc><![CDATA[Orlando, Florida]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Soda]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
<name>
<surname><![CDATA[Iannello]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<source><![CDATA[Decomposition Methods and Learning Approaches for Imbalanced Dataset: An Experimental Integration. 20th International Conference on Pattern Recognition]]></source>
<year>2010</year>
<page-range>3117-3120</page-range><publisher-loc><![CDATA[Istanbul ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Demsar]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Statistical comparisons of classifiers over multiple data sets]]></article-title>
<source><![CDATA[Journal of Machine Learning Research]]></source>
<year>2006</year>
<volume>7</volume>
<issue>2006</issue>
<page-range>1-30</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Garcia]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Herrera]]></surname>
<given-names><![CDATA[F.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An extension on statistical comparisons of classifiers over multiple data sets for all pairwise comparisons]]></article-title>
<source><![CDATA[Journal of Machine Learning Research]]></source>
<year>2008</year>
<volume>9</volume>
<issue>12</issue>
<page-range>2677-2694</page-range></nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dasarathy]]></surname>
<given-names><![CDATA[B.V.]]></given-names>
</name>
</person-group>
<source><![CDATA[Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques]]></source>
<year>1991</year>
<publisher-loc><![CDATA[Los Alamitos, CA]]></publisher-loc>
<publisher-name><![CDATA[IEEE Computer Society Press]]></publisher-name>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
