
Medicina y ética

Print version ISSN 0188-5022

Online version ISSN 2594-2166

Med. ética vol. 35, n. 4

Articles

Ethical reflections on the impact and challenges of artificial intelligence in laboratory medicine

Román Collazo, Carlos Alberto*
http://orcid.org/0000-0002-8235-4165
Brenner, Jonathan**
http://orcid.org/0009-0005-1950-8124
Andrade Campoverde, Diego***
http://orcid.org/0000-0003-4652-7708

Abstract

The use of artificial intelligence (AI) in laboratory medicine (LM) has led to a qualitative leap in the diagnosis of diseases that afflict humans. The development of robots for measurement, calculation and prediction has increased the reliability, validity and reproducibility of AI diagnostic tests, making such technology an attractive choice for the clinical laboratory. However, AI in LM entails several ethical reflections that need to be considered. The incipient state of the technology, the presence of cognitive biases in algorithms and data, the uncertainty of robot performance, technological limitations, the threat to privacy, and the absence of a legal framework open ethical conflicts that undermine human equity, safety, and autonomy. The technological imperative of AI in LM must not override responsibility, nor infringe on the dignity of the person.

Keywords:
clinical laboratory, diagnosis, liability, precaution, morality, technology

1. Introduction

LM is one of the most important medical branches in human health care (1). Although various definitions persist, LM is considered the discipline of clinical medical sciences oriented to the quantitative measurement or qualitative evaluation of substances in biological samples for medical or research purposes. The aim is to improve the health status of the individual and the population in general (2).

LM articulates branches of knowledge such as biochemistry, physiology, anatomy and histology in the diagnosis of diseases that afflict human beings. It uses an arsenal of laboratory methods and techniques such as colorimetric, turbidimetric, enzymatic, potentiometric, immunological and molecular biology in order to improve the quality of medical diagnosis (3).

Currently, to increase the validity and reliability of diagnosis, AI has been introduced as a computational tool (4). Today it is an essential branch for the work of health care managers at all levels of care, where the emergence of ethical problems leads to a deep reflection by professionals (5).

In the area of LM, AI has burst into different fields and is envisioned to revolutionize the medical diagnosis of complex specialties such as pathological anatomy (6) and precision medicine in relevant diseases such as cancer (7). However, the use of AI in LM should consider emerging ethical conflicts in this biomedical field. What ethical conflicts related to responsibility, justice, safety and autonomy are envisioned in the use of AI in LM (8)? The aim of the article is to assess the use of AI in LM in human health care from a bioethical perspective.

2. Methods

The research was conducted following an argumentative approach using a documentary review of scientific literature. Two main thematic cores were considered: applications of AI and ethical conflicts of AI in LM. A search was made of articles and books published in the last 20 years in scientific databases such as Web of Science, Scopus, PubMed, Science Direct, Google Scholar, Redalyc and Latindex, as well as web pages of the World Health Organization (WHO) and blogs of prominent figures, in English and Spanish. The search was carried out using descriptors and keywords (specific to the article) combined with Boolean connectors (AND, OR, NOT). A total of 125 articles were retrieved, of which 55 were discarded after reading the abstract or the full text because they were irrelevant or duplicated. The information sources were stored in the scientific information manager Zotero for thematic grouping and elaboration of content notes. The MAXQDA software was used to process the articles and search for thematic nodes using the content analysis method. The argumentative method was used to state positive and negative aspects of AI in LM. The paper outlines the rationale of AI and the main achievements of its application in LM. Subsequently, the ethical implications of AI in LM are discussed considering ethical principles such as responsibility, justice, autonomy and safety.

3. Results

3.1. Artificial Intelligence

AI emerged in the middle of the 20th century with the emergence of computers and the idea of computational problem solving based on propositional logic. However, the idea was discarded due to the impossibility of its imminent implementation in the practical sphere (9). The development of hardware, the increase in storage capacity and data processing speed has allowed the exponential development of this branch of computational sciences.

AI is set to be one of the great revolutions in the postmodern world. Its current use extends to different areas, from so-called smart devices (watches, phones, computers, cars, biomedical devices) to work processes in industry, science, education, health and society (10). Words such as big data, chatbot, virtual assistants, smartphone, smartwatch, neural networks, machine learning and deep learning reach our days in an avalanche that traps us in a cybernetic swamp.

The definition of AI is broad, although the consensus conceptualizes it as the set of computer programs that allow storage, data processing and decision making based on previous experiences, simulating human learning (11). Previous experience is understood as a set of data that reflect reality.

There are two fundamental variants of AI today: non-mechanical robots and mechanical robots (12). Non-mechanical robots are computer programs that generate a non-mechanical response, such as chatbots, virtual assistants (Siri, Alexa, Copilot, etc.) and smart products (watches, telephones, televisions), all of which are widely accepted by consumers. Mechanical robots involve a mechanical response, and their appearance may or may not be humanoid. In this group we find androids, zoomorphic robots and multi-articulated robots (industrial robots, domestic robots), among others.

AI can also be classified into strong, weak and general AI. Strong AI learns and generates autonomous responses from the robot, while weak AI generates the same (limited) responses, and only a change in programming can modify them. General AI would allow diverse learning over time without forgetting what was previously learned; because of its complexity, it remains a theoretical approach without practical development (13).

The creation of learning machines (machine learning, deep learning and natural language processing) attempts to mimic the structure of the human brain based on artificial neural networks. These networks are nothing more than information processing elements interconnected and organized in layers, allowing communication through the input and output of information from the system (14).
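The layered organization of processing elements described above can be sketched in a few lines of code. The network size, weights and input values below are illustrative assumptions, not drawn from any real diagnostic system.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of interconnected processing elements: weighted sum + activation."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative 4-input, 3-hidden, 1-output network (weights are arbitrary).
x = [0.9, 0.2, 0.4, 0.7]             # e.g. four laboratory measurements
w1 = [[0.5, -0.2, 0.1, 0.3]] * 3     # hidden-layer weights
b1 = [0.1, 0.0, -0.1]
w2 = [[0.6, 0.4, -0.5]]              # output-layer weights
b2 = [0.05]
hidden = layer(x, w1, b1)            # information enters the first layer...
output = layer(hidden, w2, b2)       # ...and exits the system as a score
print(output)                        # a single value in (0, 1)
```

In a trained network the weights would be adjusted from data rather than fixed by hand; the sketch only shows how information flows through the layers.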

Robot learning is achieved through three mechanisms: supervised, unsupervised and reinforcement learning. In the first case, the human being programs the robot’s actions according to what the programmer considers correct and incorrect. From the categorization of the input data, a propositional logic algorithm is established to generate a response. In the second case, the primary input data and a set of initial logical rules lead to responses that are “decided by the robot” from the input information. The machine can incorporate new response patterns without the permanent assistance of the human being, although training supervised by the programmers is decisive in achieving robotic autonomy (10).

Reinforcement learning involves trial-and-error learning, receiving continuous feedback from the developer. The artificial agent reacts to signals from its environment that represent the state of the environment. The actions performed by the agent influence the state of the environment. The main objective is to make decisions that ensure maximum reward. When the machine makes a correct decision, the supervisor gives a reward for the last action performed in the form of an evaluation (14).
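As a rough illustration of this reward-maximization loop, the following sketch shows an agent learning by trial and error which of two hypothetical actions yields the greater reward. The action names, reward values and learning rate are invented for the example; real agents in LM would be far more complex.

```python
import random

# Minimal trial-and-error sketch of reward-driven learning.
random.seed(1)
rewards = {"repeat_test": 0.2, "flag_for_review": 0.8}  # hypothetical actions
value = {action: 0.0 for action in rewards}             # the agent's estimates
alpha = 0.1                                             # learning rate

for _ in range(500):
    # epsilon-greedy: mostly exploit the best-valued action, sometimes explore
    if random.random() < 0.2:
        action = random.choice(list(rewards))
    else:
        action = max(value, key=value.get)
    # the environment's feedback (reward) for the last action, with some noise
    reward = rewards[action] + random.uniform(-0.05, 0.05)
    value[action] += alpha * (reward - value[action])   # move estimate toward reward

print(max(value, key=value.get))  # the action the agent learned to prefer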

A simple analysis reveals a set of advantages of robots over humans in functions where health is at risk or where greater efficiency in production processes is sought (no need for rest, food and sleep periods). In the opinion of the philanthropist Bill Gates (15), AI is “as fundamental as the creation of the microprocessor, the personal computer, the Internet and the cell phone”, and is set to revolutionize areas of our current life such as education, health and work processes.

3.2. Artificial intelligence and laboratory medicine

The use of AI in health care is beginning to revolutionize this service and human right in postmodern society. It has relevant applications in surgery, patient rehabilitation and medical diagnosis (16), although it is still in an initial phase, far from its greatest potential (17).

The automation of the clinical laboratory has been one of the first aspirations in the area of medical diagnostics. In the 1990s, robots emerged with the intention of improving diagnostic processes. They increased the sample processing capacity, the reliability of the results and decreased the response time, so important for an efficient diagnosis (18). Its application was mainly in the analytical stage of results, limiting the potential of this technology.

Nowadays, the clinical laboratory is organized in modular systems that perform the corresponding functions of recording, processing and output of information. Modular automated robots perform functions such as transporting the biological specimen, taking samples for analysis, performing diagnostic tests and presenting the results to the patient. However, an intelligent laboratory system that has sufficient flexibility to realize a fully automated process has not been achieved to date (19).

The use of weak AI in LM has different scopes such as test selection and prediction, generation and interpretation of results in the diagnostic process (17). In the early years, the use of AI in LM consisted of data processing. From light absorbance measurements, standard curves (linear regression mathematical models) are constructed to estimate concentrations of certain analytes such as cholesterol, glucose, creatinine, among others (4). A similar procedure is used to calculate viral load using real-time PCR (qPCR). The calculation of the Ct (cycle threshold) in qPCR assays and its graphic representation as a function of the number of copies of the genetic material allows extrapolating the values of unknown samples, assigning a pathological status or not, depending on the result (20). Another example is the calculation of disease risk using mathematical models and patient clinical variables such as serum protein concentrations and the identification of disease-associated monoclonal gammopathies using the protein electrophoresis technique (21). The AI system makes it possible to identify the area under the curve for each subgroup of blood serum proteins and diagnose the gammopathy in the patient. This system has not yet been widely adopted by clinical laboratory entities, which continue to use the traditional method where the human specialist dictates the definitive diagnosis.
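The standard-curve procedure described above amounts to a least-squares line fit followed by inversion. The sketch below illustrates it with invented glucose calibrator values; real assays follow validated calibration protocols.

```python
# Sketch of a calibration (standard) curve: fit absorbance vs. known
# concentration by least squares, then estimate an unknown sample.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical glucose standards (mmol/L) and their measured absorbances
conc = [0.0, 2.5, 5.0, 7.5, 10.0]
absorb = [0.01, 0.13, 0.25, 0.37, 0.50]

slope, intercept = fit_line(conc, absorb)
unknown_abs = 0.30
estimated_conc = (unknown_abs - intercept) / slope  # invert the curve
print(round(estimated_conc, 2))  # concentration of the unknown sample
```

The qPCR case works the same way, with Ct fitted against the logarithm of the copy number instead of absorbance against concentration.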

One of the areas of greatest contribution of AI to LM is image processing. The integration of artificial neural networks has made it possible to increase the performance of digital image processing with greater resolving power. This has been applied to specific areas such as uroanalysis (22), hematology (23) and oncology (24) to name a few.

By generating an image of the urinary sediment, the AI (machine learning) system can compare this unstructured data with a database previously loaded into the robot, allowing it to reach a diagnosis of the patient’s condition. In the hematological field, diseases are diagnosed on the basis of cell morphology (25) (erythrocytes and malaria, for example), although there are reports of suboptimal diagnostic results due to low specificity (26).

Microbiology (27) and flow cytometry (28) also have tools that perform diagnosis based on the mathematical analysis of the images obtained from the culture of microorganisms or the data captured in the cytometry. The complex interpretation that a technologist must make for diagnosis is simplified to critical reading and approval by the technologist.

In the area of microbiology, AI plays a fundamental role in identifying the different microbial species and subspecies that make up the human microbiome. Clinical microbiology informatics is progressively using AI. Genomic information from bacterial isolates, metagenomic microbial results from original samples, mass spectra recorded from cultured bacterial isolates, and digital photographs are examples of huge data sets in clinical microbiology that can be used to construct AI diagnostics (29).

AI has contributed to the prediction of the status of oncology patients by predicting bone metastasis of prostate cancer (30). In addition, accurate diagnosis of lung (31), uterine (32), prostate (33) and glioma (34) cancers is reported with sensitivity and specificity greater than 95%. It has also played a relevant role in the individualization of cancer treatment by means of radiation (35), or in the selection of the ideal treatment (36) for the cancer patient (Watson for Oncology software, IBM). In all cases the sensitivity, specificity and ROC curve (Receiver Operating Characteristic) of the AI model used are sufficient to be the diagnostic or treatment method of choice.
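For readers unfamiliar with these metrics, the sketch below shows how sensitivity and specificity are computed from a confusion matrix. The counts are hypothetical, not taken from the cited studies.

```python
# Sensitivity and specificity from a diagnostic confusion matrix.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # true positive rate: diseased correctly detected
    specificity = tn / (tn + fp)   # true negative rate: healthy correctly cleared
    return sensitivity, specificity

# Hypothetical AI model evaluated on 1,000 samples
sens, spec = diagnostic_metrics(tp=192, fn=8, fp=24, tn=776)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
```

The ROC curve summarizes how these two quantities trade off as the model's decision threshold is varied.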

Automation by means of AI has advantages in clinical diagnosis in its pre-analytical, analytical and post-analytical stages. In addition, it leads to increased efficiency and customer satisfaction with health services associated with improved service activities and reduced waiting time (37).

The pre-analytical stage achieves greater control of the biological specimens in the diagnostic process and their use by the different modules. The use of the specimen between the different modular units is optimized, achieving a better flow of samples. In addition, errors in the patient registration process and the clinical analysis to be performed are minimized. Also, analytical algorithm systems assess the relevance of diagnostic tests, relieving the responsibility of clinical laboratory managers in making decisions on which analytes to include in the patient’s diagnostic battery (38). It should be noted that AI could assess the inclusion or elimination of tests issued by the physician, which could be an area of conflict between man and technology.

In the analytical stage, AI has decreased the diagnostic time and increased the volume of activity of both patients and diagnostic tests to be performed. The most relevant contribution may be the rerouting of samples with questionable results to a new test to ensure the result. In addition, it reduces the technologist’s exposure to biological samples, which minimizes the risk of occupational accidents and the acquisition of diseases transmitted by blood or other fluids (38).

In the post-analytical stage, the process of reporting results, their validation and their corresponding issuance to health agencies is expedited. Diagnostic Decision Support Systems (DDSS) have been designed which contribute to decision-making in laboratory diagnosis, which is another area of potential conflict between the robot and the technologist (39). They also advocate the integration of laboratory results with the clinical-therapeutic condition of the patient, achieving a harmonization of the patient’s health-disease process.

4. Discussion

The WHO has promulgated the digitalization of medicine in the work agenda proposed for the period 2020-2025 (40). This has led to an increase in the implementation of technology in health services, as well as to ethical reflection on these technologies.

Even though the literature has already raised ethical concerns about AI and medical care, the latent uncertainty surrounding AI and medicine merits revisiting these aspects. Although the primary focus has been the principlist approach of Beauchamp and Childress in medical ethics, analysis from other ethical perspectives and principles enriches the reflection.

The article proposes an ethical assessment from the ethical principles of responsibility, justice, safety and autonomy (41) in the framework of AI in LM. The most relevant aspects are listed below, constituting a reflection guide for interested parties such as developers, medical and political decision-makers, producers and civil society.

4.1. Society, AI and LM

The use of AI in medicine invites reflection on the role of technology in society and its use by humans. Is it relevant to implement AI in LM? The introduction of such technology is obviously tempting, however, the contextual analysis of each clinical laboratory and the objective evaluation of the volume of activity, level of care and health services it provides should be the basis for the decision to adhere to the technology. The technological imperative should not override human logic in its implementation. Otherwise, the costs and delay in the implementation process could deteriorate healthcare and neglect the care of patients and the population in general.

It is essential to remember that the inclusion of this technology in LM is not equivalent to an increase in the quality of services. The technology is only one component within the clinical laboratory quality management system, and its erroneous adoption may jeopardize the purpose of the clinical laboratory and the quality of services, which can only be achieved through a continuous process of improvement of the work in the laboratory and the establishment of programs and spaces for this purpose (1).

In the framework of the responsible implementation of AI technology in LM, the inclusion of stakeholders is a priority. What does society think and know about this change in health care?

Recent research shows that patients (42) and physicians (43) express high satisfaction with the use of AI in the healthcare process. Patients express a high preference for virtual assistants, online consultations and electronic correspondence. Immersed in the immediacy and urgency of postmodern life, patients highlight advantages such as avoiding queues, long waiting times and being able to combine medical diagnosis with other activities. It should be investigated whether this preference is maintained in the diagnosis of catastrophic diseases such as cancer and neurodegenerative diseases, to mention a few. In the case of the physician, reasons such as increased efficiency in the diagnostic process are sufficient for its use, although it is worth mentioning that there is concern and fear of being displaced by the robot in certain functions.

The introduction of AI in LM should be done gradually, with the participation of all stakeholders, especially patients. There is still a wide lack of knowledge of AI on the part of civil society that must be overcome simultaneously with its implementation. The joint participation of physicians, clinical laboratory specialists, technologists, AI developers and patients is decisive in the social acceptance of this technology.

Timely and collegial communication on the modes of action, accessibility, benefits, risks, reparations for harm and future of LM AI should be brought to the table for dialogue with all stakeholders, including health policy makers and political leaders. Progress in this regard is scarce at the global level, although the main blocks involved such as China, the United States and the European Union show local progress on the governance of such technology in society (44) and the existence of ethical codes for the use of AI in different areas of human life (45).

4.2. AI - professional - patient relationship

A controversial aspect in the introduction of AI to LM is the robot-professional-patient relationship. Clarification of roles and functions in the diagnostic process should be a priority to establish the mechanisms, functions and tasks of the participants during medical diagnosis.

There is concern about the supplanting of the human specialist by the robot in the diagnostic process (46). The supposed humanization of the laboratory specialist’s work may imply deskilling and loss of competencies, a cessation of his functions and the exclusion of the human being by the machine, generating a form of discrimination in health institutions and an ethical dilemma in the use of technology: innovation with unemployment, or employment without innovation. A sensible middle ground would be the solution.

Currently the use of weak AI in the LM setting makes the supervision of the medical technologist necessary. Although the autonomy of the AI robot is questionable, the role of human supervision is becoming less and less essential. The use of strong intelligent systems with a high degree of autonomy (if it can be called that) in areas such as pathological diagnostics is food for thought.

The authors conceive of AI as a tool in the diagnostic process, with the medical professional being the main element in decision-making based on the information provided by the machine and the joint evaluation of the patient. Possible contradictions between the decision of the machine and the LM specialist must be resolved by human rationality with the help of the machine’s precision. The issuance of the final report of results, although performed with the support of technology, must be concluded by the human component.

Regarding the robot-professional-patient relationship, an interdependence of this triad is established where the roles and functions of each participant in the diagnosis should be defined. The robot should act as an advisor-consultant in the diagnostic process; however, in some cases it is already proclaimed as a decision-maker in the medical diagnosis. The roles of consultant-decision maker should be assigned according to the capacity to generate an accurate answer, the degree of independence and the patient’s level of confidence in the diagnosis. This situation brings us closer to a dichotomy between the paternalism of the machine versus the autonomy of the patient and the technologist. Will the robot be able to put itself in the position of the other during the dialogue with the patient, relatives and medical staff?

Some research has shown that patient confidence in the diagnosis of diseases with a difficult prognosis is higher if the diagnosis is performed by a real physician rather than by an AI system or an AI-assisted LM specialist (47). This confidence increases if the patient chooses his or her physician and the physician decides to use AI for diagnosis (48). In addition, the level of trust has been found to be influenced by other social variables such as the degree of education, the type of pathology to be treated, and the perception of the effectiveness of other AIs such as commonly used smart devices (49). Some ways to improve trust in AI include making diagnostic results more robust, increasing transparency in the operation of the technology, and promoting equity in its use (50). However, the conflict of patients who do not trust AI for medical diagnosis may arise. Will diagnostic alternatives exist for these patients so that the patient is free to choose his or her medical care?

One impact of the use of AI in LM is the de-emphasis of the clinical method, coupled with the overuse of technology in medical diagnosis (51). This has contributed to a distancing from the patient, limiting the understanding of the singular phenomenon that is the health-illness process in the human being. For some authors, the relationship the medical community has assumed with AI is erroneous. They propose a transition that is the opposite of the current one, so as to free the medical professional from administrative work and allow him to engage more closely with the patient (52).

Another aspect under debate is the possibility of humanizing the robot from an ethical and emotional perspective (53). Endowing the robot with morals implies the capacity for reflection on what is right and good, for understanding the world in the depth of acts and for the integration of phenomena, including human subjectivity. The moral robot must have an awareness of itself and of the world, making its actions have an ethical component, avoiding situations that violate human dignity. Will it be possible to build a moral robot?

There are research projects working in this direction, developing complex systems of learning and artificial consciousness (54). Some of the best known are Project Consciousness (MIRI), Self-Aware AI (Google DeepMind), Neural Episodic Control (DeepMind), NEuROCOG (EU), Neural Simulators (Anthropic), AI Self-Consciousness (MIT), among others. The humanization of the robot from the emotions must show empathy and commitment to the patient and his state of health, so that the patient feels identified with the robot and actively collaborates in the diagnosis, an aspiration with technological limitations up to the present time.

Some philosophers and scientists have suggested that AI can contribute to the moral improvement of man (55). The moral robot can make the physician reflect on conflicts or moral dilemmas in his work environment and suggest the fairest course of action by integrating the biopsychosocial perspective of the patient. At the same time, the robot can identify present or latent ethical conflicts in the practice of the profession, which are considered in decision making. Currently, there is AI applied to organ donation and decision-making during donor allocation (56) in a fair and scientifically sound manner.

The moral humanization of the robot suggests several questions to be answered by bioethicists, developers, and other stakeholders, anticipating situations that could be controversial in the future.

  • Will it be possible to moralize (through programming) a robot by instilling the web of human values and ethical codes?

  • Are there technological bases to moralize a robot?

  • What should be the level of complexity of the robot’s ethical reflection?

  • What are the ethical-legal implications of the moral conscience of machines?

In the authors’ view, the use of AI should not marginalize human contact and the perception of warmth, kindness, protection and trust offered by the face-to-face encounter of healthcare personnel with the patient. The depersonalization of the patient, who would be unable to share his or her emotions and feelings in the diagnostic process, is a problem that must be avoided. The robot-specialist-patient relationship should be based on a dialogic interaction that favors the autonomy of human beings and the heteronomy of the robot. The limits of the robot’s actions should be framed in such a way that the final decisions arise from a deep communication process between the health professional and the patient, with the intervention of the robot as the main advisor in the diagnostic process. The relationship should limit technological paternalism on the part of the machine and favor the freedom of human beings in decision making.

4.3. Accessibility to LM with AI

One aspect to be resolved in society is the growing gap in accessibility to health services (costs and access to technology). The implementation of AI in LM implies a process of technology transfer that involves a high investment of resources by the health system. At this point, some relevant questions arise, such as:

Will the population with fewer economic resources have the possibility of using these technological advances in the diagnosis of diseases? To what extent will the costs of medical services increase due to the use of AI, and can these costs be assumed by the health system? Will taxes on citizens increase due to this improvement in medical services? These questions should be analyzed so that the investment does not become an additional burden for the government or the citizenry, and public health policies should include a plan to manage this situation.

The technological gap between developed and developing countries, related to technology transfer, should also be considered. The digital divide is an undeniable reality between North and South, rich and poor, and is a barrier to the equitable implementation of AI in LM. There are still differences in equity in technological access between North and South in the use of ICTs such as the Internet, digital communication and others, a situation that was experienced during the COVID-19 pandemic (57) and the problems that arose in education and health care. The implementation of AI, without first having solved the digital divide, marginalizes the poorest and increases inequity in the use of technology. If only the richest have access to technology, justice in health care will be tainted and the human right to health will be deprived for the sake of medical technicality. The promises of equity in access to ICT and technology transfer between North and South have not been fulfilled. Will the situation be different for LM AI?

The solution must include a technology transfer program so that developing countries are able to assimilate the technology smoothly, with the support of developed countries. Only international cooperation and fair technology transfer between AI-developing and AI-consuming countries can alleviate the growing technology gap, enabling an equitable hardware and software base for the development of digital medicine.

4.4. Safety of AI in LM

Although AI in LM has shown a high degree of certainty and reliability, such technology has biases inherent to the human condition (cognitive biases), input data, information processing, and machine learning that make the medical diagnostic process fallible.

The presence of cognitive biases in AI algorithms is inherent to human creation. To think that the computer algorithm is alien to human subjectivity is a myth that must be dispelled in the scientific community and the general population. Some research has raised concerns about the impact of cognitive biases in AI (58). Different AI systems have been found to include interaction, latent and selection biases, causing unethical situations such as favoritism, discrimination and abuse of power, among others. Dispelling this myth requires equity and parity in the populations, criteria and errors represented in AI.

Numerous errors in AI and its applications have been documented, such as confusion in facial recognition systems, object identification or interaction with intelligent assistants on the web (59). The Tay chatbot was unveiled via Twitter in 2016. Even though it was not programmed to make racist or discriminatory comments, it proved capable of belittling women, distorting the events of September 11 in the United States and praising the genocide committed by Hitler during World War II. The responses triggered an apology from the company that created it, which immediately deactivated the chatbot. This situation makes us reflect on the reliability and predictability of AI systems, showing that they are not infallible and that unexpected events with unpredictable and unpleasant consequences can occur. It is important that developers foresee how to avoid or resolve these situations during implementation.

AI is being applied to LM by leaps and bounds and its results have been satisfactory. AI models have shown sensitivity, specificity, and positive and negative predictive values similar or slightly superior to human performance in the diagnosis of different pathologies (60). However, there are still aspects that bias the results (61) and should be considered for the use of this technology in humans.

First, there is the experimental nature of the technology in the field of LM and the technological difficulties of the programming process, which introduce relevant cognitive biases (58). Difficulties are also noted in the quality, storage and processing of primary data, especially images (62). The volume of data is very high while storage and processing capacity is insufficient (60), which can generate errors in the information output. In addition, several uncertainties surround its operation, especially the dynamics of the learning process and the black box effect in the robot's responses.

Data quality is central to the robot's training and learning process. Problems of data quality related to recording and source selection persist in AI development today (63). Data representativeness implies not only a high volume but also an appropriate selection of data, so that they are relevant to the health problem being modeled. This may call for the use of local data to solve health problems specific to a geographic region and the population residing in that area.

Some studies assert that the exclusion of sociodemographic variables such as biological sex, gender and skin color in AI models for disease diagnosis may be one of the most common causes of inaccuracy and error in the technology (64). Including psychosocial determinants of disease states in AI algorithms is one of the main challenges for developers.

Ensuring data quality also means being exhaustive about it (representativeness and relevance), including as many variables as possible and avoiding the discriminatory biases of the technology. Obtaining a vast and relevant volume of data allows broader and more flexible learning by the robot, together with a decrease in bias and an increase in the reliability, validity, sensitivity and specificity of the diagnosis. It also enables a deep, personalized approach to the health-disease process, revolutionizing the practice of evidence-based medicine.

Some studies report similar diagnostic results when using different AI learning methods, such as convolutional neural network architectures or classifiers such as support vector machines and random forests (60). However, others posit the superiority of one method over another (65). The situation becomes more complex when the data to be analyzed are genetic sequences or laboratory images. There is currently no certainty about which AI learning algorithm is best, nor which to use for a given type of diagnosis (61). Answering the question of what the ideal method is for an AI tool in healthcare remains a conflict to be resolved by developers and specialists.
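That the choice of method alone can change a diagnostic verdict is easy to demonstrate even with elementary classifiers. In this toy sketch (data and decision rules are invented for illustration, standing in for the neural networks and forests cited above), two rules trained on the same examples disagree on a borderline measurement:

```python
# Two elementary classifiers applied to the same toy measurements,
# illustrating that method choice alone can change a "diagnosis".
# Data and decision rules are invented, for illustration only.
positives = [3.4, 4.1, 4.4]   # training analyte levels labeled diseased
negatives = [1.8, 2.0, 2.6]   # training analyte levels labeled healthy

def threshold_rule(x, cutoff=3.0):
    # Rule 1: fixed clinical cutoff.
    return x > cutoff

def nearest_centroid(x):
    # Rule 2: assign to the class whose mean is closer.
    pos_c = sum(positives) / len(positives)   # ~3.97
    neg_c = sum(negatives) / len(negatives)   # ~2.13
    return abs(x - pos_c) < abs(x - neg_c)

for x in [2.5, 3.02, 3.5]:
    print(x, threshold_rule(x), nearest_centroid(x))
```

The two rules agree on clear-cut values (2.5, 3.5) but disagree at 3.02, which lies between the cutoff and the midpoint of the class means; real AI methods differ in the same way, only in higher-dimensional and less inspectable decision boundaries.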

There is also uncertainty surrounding the response generated by the robot. For scientists, the "reasoning" of the software and its choice of one response over another during learning remain a mystery (62). Knowing how and why the robot selects a response is one of the most important elements in predicting its performance and avoiding or mitigating unintended consequences. AI can make inferences from data, but it cannot explain how it arrived at them. The mystery of the black box in the AI response process flies in the face of transparency and accountability in healthcare decision making, and it also creates difficulties for the human specialist in the diagnostic process (66). Solving it would be a step forward in resolving the ethical conflicts in this field.
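One partial remedy discussed in the explainability literature is model-agnostic probing: treating the model as opaque and measuring how much its accuracy degrades when each input variable is scrambled (permutation importance). A minimal, self-contained sketch, with a toy "black box" and data invented for illustration and a deterministic rotation standing in for the usual random shuffle:

```python
# Permutation-importance sketch: probe an opaque model by scrambling
# one input feature at a time and measuring the accuracy drop.
# The "model" and the data are toy constructs, for illustration only.

def black_box(sample):
    # Opaque classifier: in a real system this logic is not visible.
    # Here it secretly depends only on feature 0 (e.g. an analyte level).
    return 1 if sample[0] > 0.5 else 0

data = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.3], [0.6, 0.2]]
labels = [1, 0, 1, 0, 1]

def accuracy(rows):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)

def importance(feature):
    # Deterministic "shuffle": rotate the feature's column by one row,
    # then measure how much accuracy drops relative to the baseline.
    col = [row[feature] for row in data]
    col = col[1:] + col[:1]
    scrambled = [row[:feature] + [v] + row[feature + 1:]
                 for row, v in zip(data, col)]
    return baseline - accuracy(scrambled)

print(importance(0), importance(1))
```

Scrambling feature 0 destroys the model's accuracy while scrambling feature 1 changes nothing, revealing which input the black box actually relies on without ever opening it; this mitigates, but does not resolve, the opacity problem described above.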

4.5. Data confidentiality, AI and LM

The application of intelligent systems in LM entails the recording, storage and use of a high volume of personal data and patient health data through AI systems and Big Data (5). Stakeholders are greatly concerned with safeguarding data privacy and confidentiality. Among the most controversial issues are the use and protection of the data generated during the diagnostic process.

The protection of stored data should be a policy of the healthcare institution itself, so that accessibility and privacy are guaranteed to patients and authorized medical personnel. Records should be kept in professional software with robust anonymization and digital security systems, so as to minimize unauthorized access, breaches of the information system and the consequent loss of patient privacy (67).
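At the record level, anonymization is often implemented as pseudonymization: replacing direct identifiers with irreversible tokens before data are shared or analyzed. A minimal sketch using salted hashing (the identifiers and salt are invented; a production system would additionally require key management, access control and re-identification risk assessment):

```python
import hashlib

def pseudonymize(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with an irreversible token.

    The salt must be kept secret by the institution; without it, an
    attacker cannot rebuild the id -> token mapping by hashing
    plausible identifiers.
    """
    digest = hashlib.sha256((salt + patient_id).encode("utf-8"))
    return digest.hexdigest()[:16]  # shortened token for readability

# Hypothetical identifiers, for illustration only.
token_a = pseudonymize("PAT-000123", salt="institutional-secret")
token_b = pseudonymize("PAT-000123", salt="institutional-secret")
token_c = pseudonymize("PAT-000124", salt="institutional-secret")

print(token_a == token_b)  # True: same patient maps to the same token
print(token_a == token_c)  # False: different patients stay distinct
```

Because the same patient always maps to the same token, laboratory results can still be linked longitudinally for research while the direct identifier never leaves the institution.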

Some authors consider the donation of data to the health system, with a view to its improvement, a moral obligation of patients (68). This collectivist vision maximizes the duty toward the community and minimizes the individuality of the patient, generating misunderstanding between professionals and patients. It has also been proposed to dispense with informed consent for the use of stored data when access to the patient is not possible or the costs involved are prohibitive. These positions leave patients unprotected against situations that may violate their privacy, promote discrimination based on their health condition, or enable the misuse of their data by employers, insurance companies or other entities in society.

For some authors, the health institution should state explicitly, at the time of collection, its intentions for the data now and in the near future. Dr. Enrico Coiera considers that it is not enough to have an efficient health system if it then sells its patients' data to the highest bidder and loses their trust (69). According to Larson and collaborators, in 2018 journalists from the New York Times revealed a commercial relationship between Memorial Sloan Kettering Cancer Center and Paige.AI, in which the hospital gave access to millions of histological slides stored in its databases in exchange for a 9% stake in the company (68). Evidently the economic interests and the intended purpose of the data have an unethical underpinning, which calls into question the use of the data and the breach of patient confidentiality. Paradoxically, in some regions of the world, such as the United States, the sale of customer and patient data has become commonplace.

Possible compensation to patients in the event of financial gain from the use of their data should also be set out. Abuses related to patient benefits occurred in the iconic Henrietta Lacks case (70) and may be repeated in the context of AI in LM. Informed consent that the patient must accept may be an alternative ethical solution. Its wording should be understandable to the patient, so that he or she can give free consent to the handling of the data.

Finally, the possibility that a strong AI robot could show autonomy in decisions about data handling must be contemplated. How should one proceed in that situation, and what ethical and legal regulations should ensure the good use of information by an AI robot? Developers should have the final say, limiting this potential risk through restrictive programming of the robot.

4.6. Legal aspects of AI and medicine

The implementation of AI in health care and LM must be accompanied by a solid legal framework (16), one that protects both developers and users and outlines ways of dealing with potential conflicts.

Legal experts are currently debating the legal future of strong AI, and the landscape remains unsettled. The open questions include the failure to define a legal status for the robot and the possibility of granting it a status under which it would assume liability for damages or harm resulting from its decision-making. For some jurists, criminal liability applied to the robot is analogous to criminal liability applied to corporate entities, a relevant precedent for assigning civil and criminal liability to non-human entities (54). The assignment of legal liability matters in medical situations such as malpractice, incompetence or negligence, where the harm to the patient is palpable and legal action against those involved is frequent. These situations must be differentiated to clarify possible damages to the integrity of the patient or the medical specialist. The current paradox lies in whether or not to grant legal personality to robots that are not yet autonomous but could become so in the near future.

This legal liability puts the spotlight on the robot itself, its developers, the medical company that uses it, the marketers and the physician who made the diagnosis. Who will assume responsibility for harm caused to the patient in the medical diagnosis? Perhaps a gradual and proportional solution to this dilemma should be reasoned from the current state of the technology and the degree of participation of each party in the diagnosis. Experts speak of the need to modify the current legal landscape so as to contextualize and include AI in the legal framework (9). Resolving the dichotomy between product liability and the conscious autonomy of the robot is essential to settle potential conflicts in the field of medicine and AI.

5. Conclusions

AI is a technology that enhances the diagnosis of human disease. Its introduction in LM should be oriented toward the human being as an end and toward the realization of the human right to health. The implementation of AI in medical diagnosis should be guided by profound ethical reflection, so as not to distort the essence of the proposal or mask economic or hegemonic purposes.

The inclusion of AI in LM is in a second stage, where the development of software and hardware is essential to achieve the goal of efficient, valid and reliable medical diagnosis. Its dizzying development and incipient application in LM merit a profound ethical reflection on its use, emphasizing risks, benefits and ethical and legal implications. In the implementation of AI in LM, human reasoning should predominate as a decision-maker in medical diagnostic processes, with technology being an element of support in decision-making. The development of a coherent and inclusive legal framework for AI in LM is crucial to avoid situations that may harm the physical integrity, confidentiality and morality of the parties involved.

The presence of a set of ethical conflicts in the field of AI calls for caution and responsibility in its use, in the interest of preserving human dignity. Precaution should be oriented toward projecting the risks of its use and planning how to minimize, mitigate and control risks and undesirable events. Responsibility includes the participation of all stakeholders in the implementation of AI, guiding its use along the path of good in the present and the near future.

References

  1. Sciacovelli L, Padoan A, Aita A, Basso D, Plebani M. Quality indicators in laboratory medicine: state-of-the-art, quality specifications and future strategies. Clin Chem Lab Med CCLM [Internet]. 2023 [cited March 19, 2024]; 61(4):688-95. Available at: https://www.degruyter.com/document/doi/10.1515/cclm-2022-1143/html
  2. Lippi G, Plebani M. A modern and pragmatic definition of Laboratory Medicine. Clin Chem Lab Med CCLM [Internet]. 2020 [cited February 22, 2024]; 58(8):1171-1171. Available at: https://www.degruyter.com/document/doi/10.1515/cclm-2020-0114/html
  3. Plebani M. Quality in laboratory medicine and the journal: walking together. Clin Chem Lab Med CCLM [Internet]. 2023 [cited March 19, 2024]; 61(5):713-20. Available at: https://www.degruyter.com/document/doi/10.1515/cclm-2022-0755/html
  4. Gruson D. Big Data, inteligencia artificial y medicina de laboratorio: la hora de la integración. Adv Lab Med [Internet]. 2021 [cited March 19, 2024]; 2(1):5-7. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10197294/
  5. Herman DS, Rhoads DD, Schulz WL, Durant TJS. Artificial intelligence and mapping a new direction in laboratory medicine: a review. Clin Chem [Internet]. 2021 [cited February 22, 2024]; 67(11):1466-82. Available at: https://doi.org/10.1093/clinchem/hvab165
  6. El Nahhas OSM, Loeffler CML, Carrero ZI, van Treeck M, Kolbinger FR, Hewitt KJ. Regression-based Deep-Learning predicts molecular biomarkers from pathology slides. Nat Commun [Internet]. 2024 [cited March 19, 2024]; 15:1253. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10858881/
  7. Briganti G, Le Moine O. Artificial Intelligence in Medicine: Today and Tomorrow. Front Med [Internet]. 2020 [cited March 19, 2024]; 7. Available at: https://doi.org/10.3389/fmed.2020.00027
  8. Pennestrì F, Banfi G. Artificial intelligence in laboratory medicine: fundamental ethical issues and normative key-points. Clin Chem Lab Med CCLM [Internet]. 2022 [cited February 22, 2024]; 60(12):1867-74. Available at: https://doi.org/10.1515/cclm-2022-0096
  9. González Arencibia M, Martínez Cardero D. Dilemas éticos en el escenario de la inteligencia artificial. Econ Soc [Internet]. 2020 [cited March 19, 2024]; 25(57):93-109. Available at: http://www.scielo.sa.cr/scielo.php?script=sci_abstract&pid=S2215-34032020000100093&lng=en&nrm=iso&tlng=es
  10. Zhang C, Lu Y. Study on artificial intelligence: The state of the art and future prospects. J Ind Inf Integr [Internet]. 2021 [cited March 19, 2024]; 23:100224. Available at: https://www.sciencedirect.com/science/article/pii/S2452414X21000248
  11. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol [Internet]. 2019 [cited March 19, 2024]; 28(2):73-81. Available at: https://doi.org/10.1080/13645706.2019.1575882
  12. Avila-Tomás JF, Mayer-Pujadas MA, Quesada-Varela VJ. La inteligencia artificial y sus aplicaciones en medicina I: introducciones antecedentes a la IA y robótica. Aten Primaria [Internet]. 2020 [cited February 22, 2024]; 52(10):778-84. Available at: https://linkinghub.elsevier.com/retrieve/pii/S0212656720301451
  13. Porcelli AM. Inteligencia Artificial y la Robótica: sus dilemas sociales, éticos y jurídicos. Derecho Glob Estud Sobre Derecho Justicia [Internet]. 2020 [cited March 19, 2024]; 6(16):49-105. Available at: http://www.derechoglobal.cucsh.udg.mx/index.php/DG/article/view/286
  14. Koteluk O, Wartecki A, Mazurek S, Kołodziejczak I, Mackiewicz A. How do machines learn? Artificial intelligence as a new era in medicine. J Pers Med [Internet]. 2021 [cited March 19, 2024]; 11(1):32. Available at: https://www.mdpi.com/2075-4426/11/1/32
  15. Gates B. The Age of AI has begun. gatesnotes.com [Internet]. [cited March 19, 2024]. Available at: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
  16. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ [Internet]. 2023 [cited February 22, 2024]; 23(1):689. Available at: https://doi.org/10.1186/s12909-023-04698-z
  17. Haymond S, McCudden C. Rise of the machines: artificial intelligence and the clinical laboratory. J Appl Lab Med [Internet]. 2021 [cited February 22, 2024]; 6(6):1640-54. Available at: https://doi.org/10.1093/jalm/jfab075
  18. Naugler C, Church DL. Automation and artificial intelligence in the clinical laboratory. Crit Rev Clin Lab Sci [Internet]. 2019 [cited February 22, 2024]; 56(2):98-110. Available at: https://doi.org/10.1080/10408363.2018.1561640
  19. Holland I, Davies JA. Automation in the life science research laboratory. Front Bioeng Biotechnol [Internet]. 2020 [cited March 19, 2024]; 8. Available at: https://www.frontiersin.org/articles/10.3389/fbioe.2020.571777
  20. Dobrijević D, Vilotijević-Dautović G, Katanić J, Horvat M, Horvat Z, Pastor K. Rapid triage of children with suspected COVID-19 using laboratory-based machine-learning algorithms. Viruses [Internet]. 2023 [cited February 22, 2024]; 15(7):1522. Available at: https://www.mdpi.com/1999-4915/15/7/1522
  21. Wang H, Wang H, Zhang J, Li X, Sun C, Zhang Y. Using machine learning to develop an autoverification system in a clinical biochemistry laboratory. Clin Chem Lab Med CCLM [Internet]. 2021 [cited March 19, 2024]; 59(5):883-91. Available at: https://www.degruyter.com/document/doi/10.1515/cclm-2020-0716/html?lang=en
  22. Enko D, Stelzer I, Böckl M, Derler B, Schnedl WJ, Anderssohn P. Comparison of the diagnostic performance of two automated urine sediment analyzers with manual phase-contrast microscopy. Clin Chem Lab Med. 2020; 58(2):268-73. Available at: https://doi.org/10.1515/cclm-2019-0919
  23. Acevedo A, Alférez S, Merino A, Puigví L, Rodellar J. Recognition of peripheral blood cell images using convolutional neural networks. Comput Methods Programs Biomed [Internet]. 2019 [cited March 19, 2024]; 180:105020. Available at: https://www.sciencedirect.com/science/article/pii/S0169260719303578
  24. Wang L, Chen X, Zhang L, Li L, Huang Y, Sun Y. Artificial intelligence in clinical decision support systems for oncology. Int J Med Sci [Internet]. 2023 [cited March 19, 2024]; 20(1):79-86. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9812798/
  25. Poostchi M, Silamut K, Maude RJ, Jaeger S, Thoma G. Image analysis and machine learning for detecting malaria. Transl Res J Lab Clin Med [Internet]. 2018 [cited March 19, 2024]; 194:36-55. Available at: https://www.translationalres.com/article/S1931-5244(17)30333-X/fulltext
  26. Zhang ML, Guo AX, Kadauke S, Dighe AS, Baron JM, Sohani AR. Machine learning models improve the diagnostic yield of peripheral blood flow cytometry. Am J Clin Pathol. 2020; 153(2):235-42.
  27. Bailey AL, Ledeboer N, Burnham CAD. Clinical microbiology is growing up: the total laboratory automation revolution. Clin Chem. 2019; 65(5):634-43.
  28. Ng DP, Zuromski LM. Augmented human intelligence and automated diagnosis in flow cytometry for hematologic malignancies. Am J Clin Pathol. 2021; 155(4):597-605.
  29. Undru TR, Uday U, Lakshmi JT, Kaliappan A, Mallamgunta S, Nikhat SS. Integrating artificial intelligence for clinical and laboratory diagnosis - a review. Mædica [Internet]. 2022 [cited February 22, 2024]; 17(2):420-6. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9375890/
  30. Zhang YF, Zhou C, Guo S, Wang C, Yang J, Yang ZJ. Deep learning algorithm-based multimodal MRI radiomics and pathomics data improve prediction of bone metastases in primary prostate cancer. J Cancer Res Clin Oncol [Internet]. 2024 [cited March 19, 2024]; 150(2):78. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10844393/
  31. Zhong R, Gao T, Li J, Li Z, Tian X, Zhang C. The global research of artificial intelligence in lung cancer: a 20-year bibliometric analysis. Front Oncol [Internet]. 2024 [cited March 19, 2024]; 14:1346010. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10869611/
  32. Stegmüller T, Abbet C, Bozorgtabar B, Clarke H, Petignat P, Vassilakos P. Self-supervised learning-based cervical cytology for the triage of HPV-positive women in resource-limited settings and low-data regime. Comput Biol Med [Internet]. 2024 [cited March 19, 2024]; 169:107809. Available at: https://www.sciencedirect.com/science/article/pii/S001048252301274X
  33. Guerra A, Orton MR, Wang H, Konidari M, Maes K, Papanikolaou NK. Clinical application of machine learning models in patients with prostate cancer before prostatectomy. Cancer Imaging [Internet]. 2024 [cited March 19, 2024]; 24:24. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10854130/
  34. Lv Q, Liu Y, Sun Y, Wu M. Insight into deep learning for glioma IDH medical image analysis: A systematic review. Medicine (Baltimore) [Internet]. 2024 [cited March 19, 2024]; 103(7):e37150. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10869095/
  35. Lin H, Ni L, Phuong C, Hong JC. Natural Language Processing for Radiation Oncology: Personalizing Treatment Pathways. Pharmacogenomics Pers Med [Internet]. 2024 [cited March 19, 2024]; 17:65-76. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10874185/
  36. Rietjens JAC, Griffioen I, Sierra-Pérez J, Sroczynski G, Siebert U, Buyx A. Improving shared decision-making about cancer treatment through design-based data-driven decision-support tools and redesigning care paths: an overview of the 4D PICTURE project. Palliat Care Soc Pract [Internet]. 2024 [cited March 19, 2024]; 18:26323524231225249. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10863384/
  37. Saif-Ur-Rahman K, Islam MS, Alaboson J, Ola O, Hasan I, Islam N. Artificial intelligence and digital health in improving primary health care service delivery in LMICs: A systematic review. J Evid-Based Med [Internet]. 2023 [cited March 20, 2024]; 16(3):303-20. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1111/jebm.12547
  38. Baron JM. Artificial intelligence in the clinical laboratory: an overview with frequently asked questions. Clin Lab Med [Internet]. 2023 [cited March 20, 2024]; 43(1):1-16. Available at: https://www.labmed.theclinics.com/article/S0272-2712(22)00060-9/abstract
  39. Sloane EB, Silva RJ. Chapter 83 - Artificial intelligence in medical devices and clinical decision support systems. In: Iadanza E, editor. Clinical Engineering Handbook (Second Edition) [Internet]. Academic Press; 2020 [cited March 20, 2024]:556-68. Available at: https://www.sciencedirect.com/science/article/pii/B9780128134672000845
  40. OMS. Estrategia mundial sobre salud digital 2020-2025 [Internet]. Geneva: OMS; 2021 [cited March 20, 2024]. Available at: https://iris.who.int/bitstream/handle/10665/344251/9789240027572-spa.pdf?sequence=1&isAllowed=y
  41. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics [Internet]. 2021 [cited March 20, 2024]; 22(1):14. Available at: https://doi.org/10.1186/s12910-021-00577-8
  42. Meyer AND, Giardina TD, Spitzmueller C, Shahid U, Scott TMT, Singh H. Patient perspectives on the usefulness of an artificial intelligence-assisted symptom checker: cross-sectional survey study. J Med Internet Res [Internet]. 2020 [cited March 20, 2024]; 22(1):e14679. Available at: https://www.jmir.org/2020/1/e14679
  43. Wadhwa V, Alagappan M, Gonzalez A, Gupta K, Brown JRG, Cohen J. Physician sentiment toward artificial intelligence (AI) in colonoscopic practice: a survey of US gastroenterologists. Endosc Int Open [Internet]. 2020 [cited March 20, 2024]; 08(10):E1379-84. Available at: http://www.thieme-connect.de/DOI/DOI?10.1055/a-1223-1926
  44. Pita EV. La UNESCO y la gobernanza de la inteligencia artificial en un mundo globalizado. La necesidad de una nueva arquitectura legal. Anu Fac Derecho [Internet]. 2021 [cited March 20, 2024]; (37):273-302. Available at: https://revista-afd.unex.es/index.php/AFD/article/view/1028
  45. Hagendorff T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach [Internet]. 2020 [cited February 22, 2024]; 30(1):99-120. Available at: https://doi.org/10.1007/s11023-020-09517-8
  46. Grunhut J, Wyatt AT, Marques O. Educating future physicians in artificial intelligence (AI): an integrative review and proposed changes. J Med Educ Curric Dev [Internet]. 2021 [cited March 20, 2024]; 8:23821205211036836. Available at: https://doi.org/10.1177/23821205211036836
  47. Juravle G, Boudouraki A, Terziyska M, Rezlescu C. Chapter 14 - Trust in artificial intelligence for medical diagnoses. In: Parkin BL, editor. Progress in Brain Research [Internet]. Elsevier; 2020 [cited March 20, 2024]:263-82. (Real-World Applications in Cognitive Neuroscience; vol. 253). Available at: https://www.sciencedirect.com/science/article/pii/S0079612320300819
  48. Nelson CA, Pérez-Chada LM, Creadore A, Li SJ, Lo K, Manjaly P. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol [Internet]. 2020 [cited March 20, 2024]; 156(5):501-12. Available at: https://doi.org/10.1001/jamadermatol.2019.5014
  49. Yakar D, Ongena YP, Kwee TC, Haan M. Do people favor artificial intelligence over physicians? A survey among the general population and their view on artificial intelligence in medicine. Value Health [Internet]. 2022 [cited March 20, 2024]; 25(3):374-81. Available at: https://www.sciencedirect.com/science/article/pii/S1098301521017411
  50. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res [Internet]. 2020 [cited March 20, 2024]; 22(6):e15154. Available at: https://www.jmir.org/2020/6/e15154
  51. Trainini J, Hornos Barberis E, Aranovich R. Aportes a la comprensión de la problemática actual de la trilogía médico-paciente-tecnología. Rev Argent Cardiol [Internet]. 2023 [cited March 20, 2024]; 91(4):298-301. Available at: https://rac.sac.org.ar/index.php/rac/article/view/214/608
  52. DiGiorgio AM, Ehrenfeld JM. Artificial Intelligence in Medicine & ChatGPT: De-Tether the physician. J Med Syst [Internet]. 2023 [cited March 20, 2024]; 47(1):32. Available at: https://doi.org/10.1007/s10916-023-01926-3
  53. Arnold MH. Teasing out artificial intelligence in medicine: an ethical critique of artificial intelligence and machine learning in medicine. J Bioethical Inq [Internet]. 2021 [cited February 22, 2024]; 18(1):121-39. Available at: https://doi.org/10.1007/s11673-020-10080-1
  54. Blanc CA. "El despertar de las máquinas": Reflexiones sobre el estatus moral y jurídico de la Inteligencia Artificial. Rev Int Pensam Político [Internet]. 2023 [cited March 20, 2024]; 18:213-42. Available at: https://upo.es/revistas/index.php/ripp/article/view/8529
  55. Rueda J. ¿Automatizando la mejora moral humana? La inteligencia artificial para la ética: Nota crítica sobre Lara, F. y Savulescu, J. (eds.) (2021), Más (que) humanos. Biotecnología, inteligencia artificial y ética de la mejora. Madrid: Tecnos. Daimon Rev Int Filos [Internet]. 2023 [cited March 20, 2024]; (89):199-209. Available at: https://revistas.um.es/daimon/article/view/508771
  56. Sinnott-Armstrong W, Skorburg JA. How AI can aid bioethics. J Pract Ethics [Internet]. 2021 [cited March 20, 2024]; 9(1). Available at: https://journals.publishing.umich.edu/jpe/article/id/1175/
  57. Beaunoyer E, Dupéré S, Guitton MJ. COVID-19 and digital inequalities: Reciprocal impacts and mitigation strategies. Comput Hum Behav [Internet]. 2020 [cited March 20, 2024]; 111:106424. Available at: https://www.sciencedirect.com/science/article/pii/S0747563220301771
  58. Ramírez GM. Problemática antropológica detrás de la discriminación generada a partir de los algoritmos de la inteligencia artificial. Med Ética [Internet]. 2023 [cited March 20, 2024]; 34(2):429-80. Available at: https://revistas.anahuac.mx/index.php/bioetica/article/view/1669
  59. Chen JH, Verghese A. Planning for the known unknown: machine learning for human healthcare systems. Am J Bioeth [Internet]. 2020 [cited March 20, 2024]; 20(11):1-3. Available at: https://doi.org/10.1080/15265161.2020.1822674
  60. Cui M, Zhang DY. Artificial intelligence and computational pathology. Lab Invest [Internet]. 2021 [cited March 20, 2024]; 101(4):412-22. Available at: https://www.sciencedirect.com/science/article/pii/S0023683722006468
  61. Corti C, Cobanaj M, Dee EC, Criscitiello C, Tolaney SM, Celi LA. Artificial intelligence in cancer research and precision medicine: Applications, limitations and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treat Rev [Internet]. 2023 [cited March 20, 2024]; 112:102498. Available at: https://www.sciencedirect.com/science/article/pii/S0305737222001748
  62. Daneshjou R, Smith MP, Sun MD, Rotemberg V, Zou J. Lack of transparency and potential bias in artificial intelligence data sets and algorithms: a scoping review. JAMA Dermatol [Internet]. 2021 [cited March 20, 2024]; 157(11):1362-9. Available at: https://doi.org/10.1001/jamadermatol.2021.3129
  63. Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z. Scientific discovery in the age of artificial intelligence. Nature [Internet]. 2023 [cited March 20, 2024]; 620(7972):47-60. Available at: https://www.nature.com/articles/s41586-023-06221-2
  64. Cirillo D, Catuara-Solarz S, Morey C, Guney E, Subirats L, Mellino S. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. Npj Digit Med [Internet]. 2020 [cited March 20, 2024]; 3(1):1-11. Available at: https://www.nature.com/articles/s41746-020-0288-5
  65. Sarker IH. Machine Learning: algorithms, real-world applications and research directions. SN Comput Sci [Internet]. 2021 [cited March 21, 2024]; 2(3):160. Available at: https://doi.org/10.1007/s42979-021-00592-x
  66. Jussupow E, Spohrer K, Heinzl A, Gawlitza J. Augmenting medical diagnosis decisions? An investigation into physicians' decision-making process with artificial intelligence. Inf Syst Res [Internet]. 2021 [cited March 20, 2024]; 32(3):713-35. Available at: https://pubsonline.informs.org/doi/abs/10.1287/isre.2020.0980
  67. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics [Internet]. 2021 [cited March 20, 2024]; 22(1):122. Available at: https://doi.org/10.1186/s12910-021-00687-3
  68. Larson DB, Magnus DC, Lungren MP, Shah NH, Langlotz CP. Ethics of using and sharing clinical imaging data for artificial intelligence: a proposed framework. Radiology [Internet]. 2020 [cited March 20, 2024]; 295(3):675-82. Available at: https://pubs.rsna.org/doi/full/10.1148/radiol.2020192536
  69. Coiera E. Depender de los datos: la gran debilidad de la IA moderna. Rev Innova Salud Digit [Internet]. 2020 [cited March 20, 2024]; (1):23-6. Available at: https://www1.hospitalitaliano.org.ar/landing/innova-salud-digital/sites/default/files/2022-09/11_RevistaInnovaSaludDigitalN1_2020v2.pdf
  70. Baptiste D, Caviness-Ashe N, Josiah N, Commodore-Mensah Y, Arscott J, Wilson PR. Henrietta Lacks and America's dark history of research involving African Americans. Nurs Open [Internet]. 2022 [cited March 20, 2024]; 9(5):2236-8. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9374392/