<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>2007-3607</journal-id>
<journal-title><![CDATA[PAAKAT: revista de tecnología y sociedad]]></journal-title>
<abbrev-journal-title><![CDATA[PAAKAT: rev. tecnol. soc.]]></abbrev-journal-title>
<issn>2007-3607</issn>
<publisher>
<publisher-name><![CDATA[Universidad de Guadalajara, Sistema de Universidad Virtual]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S2007-36072022000200001</article-id>
<article-id pub-id-type="doi">10.32870/pk.a12n23.742</article-id>
<title-group>
<article-title xml:lang="es"><![CDATA[Modelos y buenas prácticas evaluativas para detectar impactos, riesgos y daños de la inteligencia artificial]]></article-title>
<article-title xml:lang="en"><![CDATA[Models and good evaluative practices to detect impacts, risks and damages of artificial intelligence]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Aguirre Sala]]></surname>
<given-names><![CDATA[Jorge Francisco]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
</contrib-group>
<aff id="Af1">
<institution><![CDATA[Universidad Autónoma de Nuevo León]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>Mexico</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>00</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>00</month>
<year>2022</year>
</pub-date>
<volume>12</volume>
<numero>23</numero>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S2007-36072022000200001&amp;lng=en&amp;nrm=iso"></self-uri>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S2007-36072022000200001&amp;lng=en&amp;nrm=iso"></self-uri>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S2007-36072022000200001&amp;lng=en&amp;nrm=iso"></self-uri>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[Resumen Tomando como punto de partida el ejemplificar y reconocer los impactos, riesgos y daños causados por algunos sistemas de inteligencia artificial, y bajo el argumento de que la ética de la inteligencia artificial y su marco jurídico actual son insuficientes, el primer objetivo de este trabajo es analizar los modelos y prácticas evaluativas de los impactos algorítmicos para estimar cuáles son los más deseables. Como segundo objetivo se busca mostrar qué elementos deben poseer las evaluaciones de impacto algorítmico. La base teórica para el análisis de modelos, tomada de Hacker (2018), parte de mostrar la discriminación por falta de garantías para que los datos de entrada sean representativos, completos y depurados de sesgos, en particular del sesgo histórico proveniente de representaciones hechas por intermediarios. El diseño para descubrir el instrumento de evaluación más deseable establece una criba entre los modelos y su respectiva inclusión de los elementos presentes en las mejores prácticas a nivel global. El análisis procuró revisar todas las evaluaciones de impacto algorítmico en la literatura atingente de los años 2020 y 2021 para recabar las lecciones más significativas de las buenas prácticas de evaluación. Los resultados arrojan la conveniencia de enfocarse en el modelo del riesgo y en seis elementos imprescindibles en las evaluaciones. En las conclusiones se sugieren propuestas para transitar hacia expresiones cuantitativas de los aspectos cualitativos, a la vez que advierten de las dificultades para construir una fórmula estandarizada de evaluación. Se propone establecer cuatro niveles: impactos neutros, riesgos, daños reversibles e irreversibles, así como cuatro acciones de protección: prevención de riesgos, mitigación, reparación y prohibición.]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[Abstract Starting from exemplifying and recognizing the impacts, risks and damages caused by some artificial intelligence systems, and under the argument that the ethics of artificial intelligence and its current legal framework are insufficient, the first objective of this paper is to analyze the models and evaluative practices of algorithmic impacts to astimate which are the most desirable. The second objective is to show what elements algorithmic impact assessments should have. The theoretical basis for the analysis of models, taken fromHacker (2018), starts from showing the discrimination due to lack of guarantees that the input data is representative, complete, and purged of biases, in particular historical bias coming from representations made by intermediaries. The design to discover the most desirable evaluation instrument establishes a screening among models and their respective inclusion of the elements present in the best practices at a global level. The analysis sought to review all algorithmic impact evaluations in the relevant literature at the years 2020 and 2021 to gather the most significant lessons of good evaluation practices. The results show the convenience of focusing on the risk model and six essential elements in evaluations. The conclusions suggest proposals to move towards quantitative expressions of qualitative aspects, while warning of the difficulties in building a standardized evaluation formula. It is proposed to establish four levels: neutral impacts, risks, reversible and irreversible damage, as well as four protection actions: risk prevention, mitigation, repair and prohibition.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[Riesgos algorítmicos]]></kwd>
<kwd lng="es"><![CDATA[enfoques evaluativos]]></kwd>
<kwd lng="es"><![CDATA[decisiones humanas sobre la inteligencia artificial]]></kwd>
<kwd lng="es"><![CDATA[sectores y dominios]]></kwd>
<kwd lng="en"><![CDATA[Algorithmic risks]]></kwd>
<kwd lng="en"><![CDATA[evaluative approaches]]></kwd>
<kwd lng="en"><![CDATA[human decisions on artificial intelligence]]></kwd>
<kwd lng="en"><![CDATA[sectors and domains]]></kwd>
</kwd-group>
</article-meta>
</front><back>
<ref-list>
<ref id="B1">
<nlm-citation citation-type="book">
<collab>Ada Lovelace Institute</collab>
<collab>AI Now Institute and Open Government Partnership</collab>
<source><![CDATA[Algorithmic Accountability for the Public Sector]]></source>
<year>2021</year>
<publisher-name><![CDATA[Ada Lovelace Institute]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B2">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Aizenberg]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
<name>
<surname><![CDATA[Van Den Hoven]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Designing for human rights in AI]]></article-title>
<source><![CDATA[Big Data &amp; Society]]></source>
<year>2020</year>
<volume>7</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>1-14</page-range></nlm-citation>
</ref>
<ref id="B3">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Andrade]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
<name>
<surname><![CDATA[Kontschieder]]></surname>
<given-names><![CDATA[V.]]></given-names>
</name>
</person-group>
<source><![CDATA[AI Impact Assessment: A Policy Prototyping Experiment]]></source>
<year>2021</year>
<publisher-name><![CDATA[Open Loop]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Argelich]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Smart Contracts O Code Is Law]]></article-title>
<source><![CDATA[InDret]]></source>
<year>2020</year>
<numero>2</numero>
<issue>2</issue>
<page-range>1-41</page-range></nlm-citation>
</ref>
<ref id="B5">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cortina]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Ética de la inteligencia artificial]]></article-title>
<source><![CDATA[Anales de la Real Academia de Ciencias Morales y Políticas]]></source>
<year>2019</year>
<numero>96</numero>
<issue>96</issue>
<page-range>379-94</page-range></nlm-citation>
</ref>
<ref id="B6">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dalli]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Artificial intelligence act. European Parliament]]></article-title>
<source><![CDATA[European Parliamentary Research Service]]></source>
<year>2021</year>
</nlm-citation>
</ref>
<ref id="B7">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dastin]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Amazon scraps secret AI recruiting tool that showed bias against women]]></article-title>
<source><![CDATA[Reuters]]></source>
<year>2018</year>
</nlm-citation>
</ref>
<ref id="B8">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[De Cremer]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Kasparov]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[The ethical AI-paradox: why better technology needs more and not less human responsibility]]></article-title>
<source><![CDATA[AI and Ethics]]></source>
<year>2022</year>
<numero>2</numero>
<issue>2</issue>
<page-range>1-4</page-range></nlm-citation>
</ref>
<ref id="B9">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[De Moya]]></surname>
<given-names><![CDATA[J-F.]]></given-names>
</name>
<name>
<surname><![CDATA[Pallud]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[From panopticon to heautopticon: A new form of surveillance introduced by quantified-self practices]]></article-title>
<source><![CDATA[Information System Journal]]></source>
<year>2020</year>
<volume>30</volume>
<numero>6</numero>
<issue>6</issue>
<page-range>940-76</page-range></nlm-citation>
</ref>
<ref id="B10">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Del Río]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[China publica código ético para regular la Inteligencia Artificial, ¿qué diría Isaac Asimov?]]></article-title>
<source><![CDATA[Emprendedor]]></source>
<year>2022</year>
</nlm-citation>
</ref>
<ref id="B11">
<nlm-citation citation-type="book">
<collab>European Union, Agency for Fundamental Rights</collab>
<source><![CDATA[Getting the future right. Artificial Intelligence and Fundamental Rights]]></source>
<year>2020</year>
<publisher-name><![CDATA[Publications Office of the European Union]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B12">
<nlm-citation citation-type="book">
<collab>European Union, European Commission</collab>
<source><![CDATA[Annexes accompanying the Proposal for a Regulation of the European Parliament and of the Council. Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts]]></source>
<year>2021</year>
<publisher-name><![CDATA[European Commission]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B13">
<nlm-citation citation-type="book">
<collab>European Union, European Commission</collab>
<source><![CDATA[Commission staff working document impact assessment. Accompanying the Proposal for a Regulation of the European Parliament and of the Council. Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts]]></source>
<year>2021</year>
<publisher-name><![CDATA[European Commission]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B14">
<nlm-citation citation-type="book">
<collab>Expert Group on Architecture for AI Principles to be Practiced</collab>
<source><![CDATA[AI Governance in Japan]]></source>
<year>2021</year>
<publisher-name><![CDATA[Ministry of Economy, Trade and Industry]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B15">
<nlm-citation citation-type="">
<collab>Gobierno de México</collab>
<source><![CDATA[Principios y guía de análisis de impacto para el desarrollo y uso de sistemas basados en inteligencia artificial en la administración pública federal. Secretaría de la Función Pública]]></source>
<year>2018</year>
</nlm-citation>
</ref>
<ref id="B16">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Golbin]]></surname>
<given-names><![CDATA[I.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Algorithmic impact assessments: What are they and why do you need them?]]></article-title>
<source><![CDATA[PricewaterhouseCoopers US]]></source>
<year>2021</year>
</nlm-citation>
</ref>
<ref id="B17">
<nlm-citation citation-type="journal">
<collab>Government of Canada</collab>
<article-title xml:lang=""><![CDATA[Algorithmic Impact Assessment Tool]]></article-title>
<source><![CDATA[Government of Canada]]></source>
<year>2021</year>
</nlm-citation>
</ref>
<ref id="B18">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hacker]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU Law]]></article-title>
<source><![CDATA[Common Market Law Review]]></source>
<year>2018</year>
<volume>55</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>1143-83</page-range></nlm-citation>
</ref>
<ref id="B19">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hartmann]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[Wenzelburger]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Uncertainty, risk, and the use of algorithms in policy decisions: a case study on criminal justice in the USA]]></article-title>
<source><![CDATA[Policy Sciences]]></source>
<year>2021</year>
<volume>54</volume>
<page-range>269-87</page-range></nlm-citation>
</ref>
<ref id="B20">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Henz]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Ethical and legal responsibility for Artificial Intelligence]]></article-title>
<source><![CDATA[Discover Artificial Intelligence]]></source>
<year>2021</year>
<volume>1</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>1-5</page-range></nlm-citation>
</ref>
<ref id="B21">
<nlm-citation citation-type="book">
<collab>Honda-Robotics</collab>
<source><![CDATA[ASIMO. El robot humanoide más avanzado del mundo]]></source>
<year></year>
<publisher-name><![CDATA[Honda]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B22">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lauer]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[You cannot have AI ethics without ethics]]></article-title>
<source><![CDATA[AI and Ethics]]></source>
<year>2021</year>
<volume>1</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>21-5</page-range></nlm-citation>
</ref>
<ref id="B23">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Martínez-Ramil]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Is the EU human rights legal framework able to cope with discriminatory AI?]]></article-title>
<source><![CDATA[IDP. Revista de internet, derecho y política]]></source>
<year>2021</year>
<numero>34</numero>
<issue>34</issue>
<page-range>1-14</page-range></nlm-citation>
</ref>
<ref id="B24">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Metcalf]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Moss]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
<name>
<surname><![CDATA[Watkins]]></surname>
<given-names><![CDATA[E. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Singh]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Elish]]></surname>
<given-names><![CDATA[M. C.]]></given-names>
</name>
</person-group>
<source><![CDATA[Assembling Accountability. Algorithmic Impact Assessment for the Public Interest]]></source>
<year>2021</year>
<publisher-name><![CDATA[Data &amp; Society Research Institute]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B25">
<nlm-citation citation-type="book">
<collab>Organización para la Cooperación y el Desarrollo Económicos (OECD.AI)</collab>
<source><![CDATA[OECD AI Policy Observatory]]></source>
<year>2019</year>
<publisher-name><![CDATA[OECD]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B26">
<nlm-citation citation-type="book">
<collab>Organización para la Cooperación y el Desarrollo Económicos (OECD.AI)</collab>
<source><![CDATA[OECD AI Policy Observatory]]></source>
<year>2021</year>
<publisher-name><![CDATA[OECD]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B27">
<nlm-citation citation-type="book">
<collab>Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura (UNESCO)</collab>
<article-title xml:lang=""><![CDATA[Proyecto de texto de la recomendación sobre la ética de la inteligencia artificial]]></article-title>
<source><![CDATA[Informe de la Comisión de Ciencias Sociales y Humanas]]></source>
<year>2021</year>
<page-range>13-42</page-range><publisher-name><![CDATA[UNESCO]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B28">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Ruckenstein]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Schüll]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[The Datafication of health]]></article-title>
<source><![CDATA[Annual Review of Anthropology]]></source>
<year>2017</year>
<volume>46</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>261-78</page-range></nlm-citation>
</ref>
<ref id="B29">
<nlm-citation citation-type="book">
<collab>Unión Europea</collab>
<source><![CDATA[Libro blanco sobre la inteligencia artificial. Un enfoque europeo orientado a la excelencia y la confianza]]></source>
<year>2020</year>
<publisher-name><![CDATA[Unión Europea]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B30">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Vought]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
</person-group>
<source><![CDATA[Guidance for Regulation of Artificial Intelligence Applications]]></source>
<year>2020</year>
<publisher-name><![CDATA[Executive Office of the President, Office of Management and Budget]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B31">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Yeung]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<source><![CDATA[Responsibility and AI. A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework]]></source>
<year>2019</year>
<publisher-name><![CDATA[Council of Europe]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B32">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Google Photos Tags Two African-Americans As Gorillas Through Facial Recognition Software]]></article-title>
<source><![CDATA[Forbes]]></source>
<year>2015</year>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
