Revista mexicana de ingeniería biomédica

On-line version ISSN 2395-9126; print version ISSN 0188-9532

Rev. mex. ing. bioméd vol.42 no.2 México may./ago. 2021  Epub 03-Nov-2021

https://doi.org/10.17488/rmib.42.2.13 

Research articles

Autonomic Face Mask Detection with Deep Learning: an IoT Application

Detección Autonómica de Cubrebocas con Aprendizaje Profundo: una Aplicación del IoT

V. H. Benitez-Baltazar1 

J. H. Pacheco-Ramírez1 

J. R. Moreno-Ruiz1 

C. Nuñez-Gurrola1 

1Universidad de Sonora


ABSTRACT

A new and deadly virus known as SARS-CoV-2, responsible for the coronavirus disease (COVID-19), is spreading rapidly around the world and has caused more than 4 million deaths. Hence, there is an urgent need to find new and innovative ways to reduce the likelihood of infection. One of the most common ways of catching the virus is contact with droplets emitted by a sick person. The risk can be reduced by wearing a face mask, as suggested by the World Health Organization (WHO), especially in closed environments such as classrooms, hospitals, and supermarkets. However, people hesitate to wear a face mask, increasing the risk of spreading the disease; moreover, when a face mask is used, it is sometimes worn incorrectly. In this work, an autonomic face mask detection system with deep learning, powered by the image tracking technique used in augmented reality development, is proposed as a mechanism to require the correct use of face masks before granting people access to critical areas. To achieve this, a machine learning model based on Convolutional Neural Networks was built on top of an IoT framework to enforce the correct use of the face mask in required areas, as requested by law in some regions.

KEYWORDS: Machine learning; COVID - 19; Cyber-Physical Systems; Internet of Things

RESUMEN

Un virus nuevo y letal conocido como SARS-CoV-2, responsable de la enfermedad del coronavirus (COVID-19), se está propagando rápidamente por el mundo y ha provocado más de 4 millones de muertes. Por tal razón, existe una necesidad urgente de encontrar formas nuevas e innovadoras de reducir la probabilidad de infección. Una de las formas usuales de contraer el virus es al estar en contacto con las gotas de saliva de una persona enferma. Este riesgo se puede reducir usando una mascarilla tipo cubrebocas como sugiere la Organización Mundial de la Salud (OMS), especialmente en entornos cerrados como aulas, hospitales y supermercados. Sin embargo, las personas dudan en usar una mascarilla, lo que aumenta el riesgo de propagar la enfermedad, además, cuando se usa la mascarilla, a veces se usa de manera incorrecta. En este trabajo de investigación se propone un sistema autonómico de detección de mascarilla con aprendizaje profundo empoderado con la técnica de detección de imágenes que se utiliza en desarrollos de realidad aumentada como mecanismo para solicitar el correcto uso de mascarilla para permitir el acceso de personas a zonas críticas. Para lograr esto, se construyó un modelo de aprendizaje máquina basado en redes neuronales convolucionales con un enfoque de IoT para hacer cumplir el uso correcto de la máscara facial en las áreas requeridas, tal como lo exige la ley en algunas regiones.

PALABRAS CLAVE: Machine learning; COVID - 19; Cyber-Physical Systems; Internet of Things

INTRODUCTION

Infectious diseases such as influenza and Coronavirus Disease 19 (COVID-19) cause millions of deaths around the world [1] [2]. The pathogens of such diseases are mainly spread by droplets or aerosols produced by coughing, sneezing, etc. [3]. Nowadays, due to the pandemic caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus, there is an urgent need to limit airborne transmission of COVID-19. The target is to develop and implement effective methods or mechanisms to remove particles such as viruses from the air. Dissemination of infectious pathogens in crowded areas can be significant and, in many scenarios, the requirement is to implement mechanisms to protect people from being exposed to pathogens [4]. One of the most popular mechanisms is the use of a face mask, which in some regions is required by law [5]. The World Health Organization (WHO) issued a guide on the use of face masks as a mechanism to reduce the risk of exposure to COVID-19 [6]. In the document, the WHO states: “Place the mask carefully, ensuring it covers the mouth and nose, and tie it securely to minimize any gaps between the face and the mask”. The guide aims to help people understand the benefits of using a face mask and the risks associated with not wearing one or misusing it. Despite the requirements and regulations, people hesitate to wear a mask, or they wear it in the wrong way.

The proliferation of SARS-CoV-2 has affected countries all over the world, and technology has an important role to play in this matter. Today's technology has enabled some areas, such as schools, to continue operating, but other areas or jobs still require face-to-face contact, for instance, hospitals. To reactivate the economy, a certain level of on-site or face-to-face activity is needed [7], but always observing healthcare regulations such as wearing a face mask. The IoT, together with AI techniques, can provide interesting solutions for the COVID-19 pandemic.

Internet of Things (IoT) techniques have been crucial against this pandemic, especially for detecting and tracking infected people. In [8], the authors proposed an IoT system for collecting vital signs from different users. With this system, important data can be collected and analyzed for a better understanding of the virus and its symptoms. Artificial intelligence (AI), for its part, has been very important in fighting this pandemic; examples are algorithms that detect whether a person is infected with COVID-19. An image classification algorithm using deep learning to detect infections in X-ray images is proposed in [9]. With these algorithms, the images can be processed and enhanced to help doctors reach better diagnoses.

To keep track of the people wearing face masks, a surveillance camera can be used to detect in real time whether someone is wearing a mask; this is possible thanks to the development of AI. In [10], the authors proposed a method for detecting anomalies in surveillance videos using deep learning techniques. One advantage of using AI is that no single person needs to watch the area at every moment.

This paper presents the implementation of a face mask detection system that uses augmented reality as a tracking mechanism to trigger a screen projection on a mobile device, which is used to request access to critical areas where the correct use of a face mask is a requirement. To achieve this, a machine learning model based on Convolutional Neural Networks is built on top of an IoT framework to enforce the correct use of the face mask in required areas.

Cyber-Physical Systems

Cyber-Physical Systems (CPS) refers to the combination or integration of the physical part of a system with its computations, mainly focused on their interaction [11]. Although this integration is not new, as embedded systems have been around for a while [12], the term CPS is relatively new: Helen Gill introduced it in 2006 and related it to the older concept of cybernetics [11].

CPS is growing very fast, and its growth is closely related to that of other technologies such as the Internet of Things and cloud computing. The applications of this kind of system are very wide; some important ones are health care, smart cities, industrial processes, and machine connectivity, just to mention a few.

Deep learning

Deep learning is one of the main subfields of machine learning. Deep learning algorithms are composed of multiple layers that represent learning at different levels; this representation is inspired by biological neural networks [13]. Deep learning uses these Artificial Neural Networks (ANNs) to feed a machine with information and generate knowledge without human interaction. Over the last few years, deep learning has been a trend in AI and machine learning systems. It is widely used in several applications such as speech recognition, object detection, natural language processing (NLP), image classification, and many more [14].

An important asset for Deep Learning is data; a lot of data is needed to give the machine enough information to make good decisions. These algorithms use the new information to change the internal parameters in the ANN for better future performance [14].

Convolutional Neural Network

Convolutional Neural Networks (CNNs) have been widely used in recent years for real-time applications such as face detection [15]. This class of networks can automatically extract features from the input data and assign them learnable weights; this happens in the convolutional layers. Once the features are extracted, the pooling layers apply filters that downsample the data and reduce the number of parameters [16].

A basic example of CNN is shown in Figure 1.

Figure 1 Basic CNN structure in an image classification problem. 
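A conv/pool/dense pipeline of the kind shown in Figure 1 can be sketched in Keras. The input size, filter counts, and layer depths below are illustrative assumptions, not the values used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative sketch of the conv -> pool -> dense pipeline from Figure 1;
# the 64x64 input size and filter counts are assumptions, not the paper's values.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolutional layer: feature extraction
    layers.MaxPooling2D((2, 2)),                    # pooling layer: parameter reduction
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # binary output: mask / no mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

With a sigmoid output, the network produces a single mask/no-mask probability; a two-unit softmax head, as used later with MobileNetV2, is an equivalent alternative.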

Image Classification

Image recognition and classification are difficult tasks for machines [17]. Deep learning methods are used to preprocess the images, for example through noise reduction, enhancement, and color correction, to obtain better data before performing the classification. Multiple images are needed to feed the algorithms and obtain better results. There are also techniques to improve the training data of an algorithm; they increase the quality and quantity of the data so that the algorithms work better in different types of environments. This is called data augmentation [18].
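Two common augmentations, a random horizontal flip and a brightness shift, can be written directly in NumPy as a minimal illustration; real pipelines typically rely on library utilities, and the value ranges used here are arbitrary.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly augmented copy of an HxWxC uint8 image (illustrative only)."""
    out = image.copy()
    if rng.random() < 0.5:                      # random horizontal flip
        out = out[:, ::-1, :]
    shift = rng.integers(-30, 31)               # random brightness shift
    out = np.clip(out.astype(np.int16) + shift, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
batch = np.stack([augment(img, rng) for _ in range(8)])   # 8 augmented variants
print(batch.shape)  # (8, 64, 64, 3)
```

Each call produces a slightly different training sample from the same source image, which is exactly the quantity/variety gain that data augmentation provides.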

Internet of Things

The Internet of Things (IoT), also known as the Internet of Objects or the Internet of Everything, refers to the interconnected network of all kinds of objects, which are often equipped with data processing technology [19]. Experts estimate that by the end of 2025 there will be approximately 75 billion devices connected to the internet [20].

MATERIALS AND METHODS

In this project, it is fundamental to integrate several technologies so that communication persists and remains consistent from the sender to the receiver, that is, from the physical machine to the digital information visualization system. MQTT (Message Queue Telemetry Transport) is a well-known lightweight messaging protocol for sensors, mobile devices, and IoT systems [21], widely used to manage message transport from publishers to subscribed clients. This protocol must be combined with other technologies to reach its full potential. This section describes each of the components that bridge the layers of the overall framework.
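MQTT follows a publish/subscribe pattern in which a broker routes each published message to every client subscribed to its topic. The toy broker below illustrates only that routing idea in dependency-free Python; a real deployment would use a broker such as Mosquitto with an MQTT client library, and the topic names are made-up examples.

```python
# Minimal in-process sketch of MQTT-style publish/subscribe routing.
# A real system would use a broker (e.g., Mosquitto) and a client library;
# this toy broker only illustrates the pattern, not the MQTT wire protocol.
class ToyBroker:
    def __init__(self):
        self.subscribers = {}            # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)     # deliver to every subscriber of the topic

broker = ToyBroker()
received = []
broker.subscribe("access/door1", lambda t, p: received.append(p))
broker.publish("access/door1", '{"user": "demo", "mask": true}')
broker.publish("access/door2", "ignored")     # no subscriber on this topic
print(received)  # ['{"user": "demo", "mask": true}']
```

Because publishers and subscribers only share a topic string, the camera client, the database client, and the mobile app never need to know each other's addresses; the broker decouples them.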

An algorithm to explain each of the steps followed in this project is presented in Figure 2. Each step will be described in the following subsections.

Figure 2 Algorithm that presents the system implementation. 

Methodology

Figure 3 shows the IoT framework for this project. This framework is based on the architecture proposed and explained by the authors in [22]. At the center is the MQTT Mosquitto broker, which is in charge of communicating all devices and states. The clients are subscribed to the assigned topic, which serves as the main communication channel. This topic carries all the user states, such as the username, ID, connection attempts, and whether the user is wearing the mask, to grant or deny access. The messages use the JavaScript Object Notation (JSON) format, a lightweight data-interchange method that is easy for humans to read and write and suitable for machines to process.

Figure 3 IoT Framework. 
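A user-state message of the kind described above might look as follows; the field names are assumptions inferred from the description (username, ID, connection attempts, mask status), not the system's exact schema.

```python
import json

# Hypothetical user-state message; field names follow the description in the
# text but are not the paper's exact schema.
state = {
    "username": "jdoe",
    "id": 42,
    "attempts": 1,
    "mask": True,
    "access": "granted",
}
payload = json.dumps(state)          # serialized message published over MQTT
restored = json.loads(payload)       # subscribing clients parse it back into a dict
print(payload)
```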

The three MQTT clients in this IoT framework are a Raspberry Pi device, a computer, and the mobile device running the access request application. The Raspberry Pi client controls the camera and the servomotor: it turns on the camera when access is requested and sends the open signal to the servomotor, acting as an actuator, if access is granted. The computer client collects all the MQTT server stats to store them in the database and update the dashboard. The mobile device client sends the access request with the user parameters.
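The decision logic of the Raspberry Pi client can be sketched as a pure function. In the deployed system this logic would sit inside the MQTT client's message callback and drive the servomotor through a GPIO library; the servo angles and the payload field are hypothetical.

```python
import json

OPEN_ANGLE, CLOSED_ANGLE = 90, 0     # hypothetical servo positions in degrees

def handle_message(payload: str) -> int:
    """Decide the servo angle from an incoming user-state message.

    In the real system this would run inside the MQTT on_message callback,
    and the returned angle would be written to the servomotor via GPIO;
    it is kept as a pure function here for clarity.
    """
    state = json.loads(payload)
    return OPEN_ANGLE if state.get("mask") else CLOSED_ANGLE

print(handle_message('{"username": "jdoe", "mask": true}'))   # 90
print(handle_message('{"username": "jdoe", "mask": false}'))  # 0
```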

The camera detects in real time when someone requests access to the area of interest. This means that if a person puts on the mask or takes it off within a very short time, it will be detected. The Raspberry Pi receives the data from the camera and constantly communicates those values to the service layer through the internet.

The camera device and the servo motor correspond to the Physical layer. The Raspberry Pi device communicates the Physical layer status as a gateway, so it corresponds to the Communication layer. All data is transferred through the internet and stored in a database, reaching the Service layer. For this case, services are implemented on a local server configured with Apache Server.

The last layer is the Visualization one. Once the data is stored and processed it must be visualized. The parameters of the access control system are displayed on the dashboard and in the mobile device application to inform the user of its status when the access is requested. This layer also displays the ID and picture of the person requiring access.

Face Mask Detection Algorithm

To detect in real time whether the person in front of the camera is wearing a mask, a detector model was built. To train the model, the images were taken from the Kaggle Face Mask Detection Dataset [23]. This dataset consists of 3725 images of people wearing a mask and 3728 images of people not wearing one. An example image for each class can be seen in Figure 4. Masks with different colors, shapes, and textures were considered to ensure that as many types of face masks as possible were taken into account. For the current model, images of people wearing the mask incorrectly were not used.

Figure 4 Examples of the images used for the training of each class [23]

The model uses a CNN and deep learning to extract and process the data and produce a classification output. The CNN is designed using the Keras and TensorFlow Python libraries and the MobileNetV2 architecture. This architecture shows acceptable performance with low computational power [14], which makes the model suitable for embedded devices. Once the model was trained, it was deployed to the Raspberry Pi and camera to start the real-time detection, as shown in Figure 5.

Figure 5 Real-time face mask detection. On the left, the interface is displayed when someone is not wearing a mask; on the right, when someone is wearing one. 
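A MobileNetV2-based classifier of this kind might be assembled in Keras as follows. The 224x224 input, the classification head, and the frozen base are common transfer-learning choices, not necessarily the configuration used in this work; pretrained ImageNet weights would normally be loaded (`weights="imagenet"`), omitted here to keep the sketch self-contained.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Sketch of a MobileNetV2-based mask classifier; input size and head layers
# are common choices, not necessarily the paper's exact configuration.
base = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False                      # freeze the feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # two classes: mask / no mask
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Freezing the base and training only the small head is what keeps training fast and makes the resulting model light enough for an embedded device.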

System Modelling

Cyber-physical systems, like the one presented in this paper, can be modeled using state machines to represent their behavior. For the design of the dynamics of the access control system, MATLAB’s Simulink was used. Figure 6 shows the layout where its operation is described.

Figure 6 Access Control System modeled in Simulink. 

The state machine represented by the Simulink Stateflow manages the behavior of the system according to the inputs it receives. These inputs are the access attempt that the person sends with their username from a mobile application, the result of the mask detection model on the image captured in real time by the camera, and the successful or failed connection of the MQTT server. The state machine can be seen in Figure 7.

Figure 7 State machine for access control. 

The first state searches for an access attempt, which is made from the mobile app. Once an access attempt is detected, the system jumps to the next state, which is in charge of checking the connection with the MQTT server. If the connection is successful, it moves on to the next state; otherwise, it returns to the first state and the user must retry the access until there is a successful connection. In this state, the user information is sent to a database. The third state checks whether the person who wants to enter is wearing a mask: if so, access is granted and the door lock state is set to open; if not, entry is denied.
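The three-state flow just described can be sketched as a small transition function; the state names, event fields, and actions below paraphrase the description and are not the authors' Stateflow model.

```python
# Minimal sketch of the access-control state machine described in the text.
# States: WAIT (idle), CONNECT (check MQTT), CHECK_MASK (run detector).
def step(state: str, event: dict) -> tuple:
    """Return (next_state, action) for one input event; action None = no output."""
    if state == "WAIT":
        return ("CONNECT", None) if event.get("access_attempt") else ("WAIT", None)
    if state == "CONNECT":
        # a failed connection sends the user back to WAIT to retry
        return ("CHECK_MASK", "store_user") if event.get("mqtt_ok") else ("WAIT", None)
    if state == "CHECK_MASK":
        return ("WAIT", "open_door") if event.get("mask") else ("WAIT", "deny")
    raise ValueError(f"unknown state: {state}")

s, _ = step("WAIT", {"access_attempt": True})
s, _ = step(s, {"mqtt_ok": True})
s, action = step(s, {"mask": True})
print(action)  # open_door
```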

The database subsystem receives the user's information, stores it, and transmits it to a dashboard designed in HTML where all the access attempts can be visualized. The door lock subsystem is responsible for controlling the servomotor or any other lock mechanism that may be selected.

Mobile Application

The Android-based mobile application runs the image tracking detection developed with the Unity 3D graphics engine and the Vuforia SDK.

Unity is a very popular video game engine used to create sophisticated video games and a wide range of interactive apps for several kinds of users and industries.

Vuforia Engine is a straightforward-to-integrate software development kit (SDK) that uses recent computer vision techniques for tracking and recognizing images and objects in augmented reality applications [24]. It controls a camera sensor that captures frames and passes them to computer vision algorithms, which detect and track real-world objects and compare them with the targets registered in Vuforia's web-based developer portal [25].

This work takes advantage of the Vuforia Engine SDK and the Unity Engine to track targets and display content on the handheld device.

In this project, the Vuforia Engine SDK and the Unity Engine were used to develop a tracking app that triggers a mobile user interface where the user inserts their credentials to request access (Figure 8).

Figure 8 Control Access Application: a) User is connecting. b) Access denied. c) Access granted. 

The app is installed on a handheld device, which displays in full-screen mode the user interface through which the access control connection is made. The mobile application uses the M2Mqtt library, an MQTT client available for all .Net platforms for IoT and M2M communication.

RESULTS AND DISCUSSION

Detection Model Performance

The metrics used to evaluate the performance of the detection model are Precision, Recall, F1-score, and Accuracy, defined as follows:

Precision = TruePos / (TruePos + FalsePos) (1)

Recall = TruePos / (TruePos + FalseNeg) (2)

F1 Score = 2 · (Precision · Recall) / (Precision + Recall) (3)

Accuracy = (TruePos + TrueNeg) / (TruePos + TrueNeg + FalsePos + FalseNeg) (4)

In the Equations (1) to (4), True Positives (TruePos) are the images that were classified correctly as positives, in this case, people wearing masks.

Similarly, True Negatives (TrueNeg) are the images correctly classified as negatives, people not wearing masks. False Positives (FalsePos) are those cases when the image is classified as positive, but it is labeled as negative. False Negatives (FalseNeg), on the other hand, are those cases when the image is classified as a negative but is labeled as a positive.

The obtained metrics for the face mask detector model after the training are presented in Figure 9.

Figure 9 Classification Report. 

The model presents an average accuracy of 96% when classifying if a person is wearing a mask or not.

The behavior of the model over 20 epochs of training can be seen in Figure 10. As can be seen, the training loss decreases as the model is trained, while the accuracy of the model increases. The total training time was close to 40 minutes.

Figure 10 Training Loss and Accuracy of the model. 

The Confusion Matrix presented in Figure 11 can lead to a better understanding of the model’s results and shows where it gets confused.

Figure 11 Confusion Matrix. 

The model correctly identified 685 images of people wearing masks (91.95% true positives) and 762 images of people not wearing masks (99.48% true negatives). Nevertheless, the model incorrectly classified 60 images as people not wearing masks when in fact they were wearing one (8.05% false negatives). Finally, it incorrectly classified 4 images as people wearing a mask when they were not (0.52% false positives), which is good for our proposal, as the system will make minimal errors when a person tries to enter without a mask.
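These percentages follow directly from the confusion-matrix counts, as a quick check shows.

```python
# Counts reported in the confusion matrix (Figure 11).
tp, fn = 685, 60      # people wearing masks: correctly detected / missed
tn, fp = 762, 4       # people not wearing masks: correctly detected / missed

recall = tp / (tp + fn)                      # true-positive rate
specificity = tn / (tn + fp)                 # true-negative rate
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"recall      {recall:.2%}")       # 91.95%
print(f"specificity {specificity:.2%}")  # 99.48%
print(f"accuracy    {accuracy:.2%}")     # 95.76%, i.e. ~96%
```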

The ROC curve displayed in Figure 12 shows the performance of the model when differentiating one class from the other with the default threshold of 0.5. An ideal model has an Area Under the Curve (AUC) of 1. This model presents an AUC of 0.96, which represents good performance in distinguishing between the two classes.

Figure 12 ROC Curve. 

Limitations and Future work

One limitation of the current model is that it was trained only with people wearing or not wearing a mask. Cases where the person wears the mask incorrectly were not taken into consideration, although the model usually classifies these cases as not wearing a mask (see Figure 5). A third class could be added for cases when the person is wearing the mask incorrectly; this would help the detector perform better.

Another limitation of this project is the hardware of the embedded system. For this proposal, a Raspberry Pi 4 is used, which has certain constraints when working with real-time object detection.

Currently, the time elapsed from when the user requests access until the system recognizes whether the person is wearing a mask and grants access is 2 to 2.5 seconds. This would improve with a device with higher GPU capacity.

The number of devices could be reduced by running the MQTT Mosquitto server and the Apache server on the same Raspberry Pi.

Considering that this work employs the target tracking technique using the Unity and Vuforia Engines, augmented reality technology can be taken to the next level by adding useful and attractive information to the field of view of the user's mobile device, allowing unlimited representations for user interfaces, videos, 3D objects, visual animations, or other features, such as security and health information.

This system is flexible and adaptable to any area, section, room, department, or other place according to the needs of the company or institution. The door lock mechanism may differ for each access control point, and it can be as simple or complex as required, for example, a servomotor or an electromagnetic door lock.

Another implementation possibility is to add control parameters to the system, such as the ID of the Access Point or other health measures from the person.

Push notifications or alert messages can be added to notify supervisors. The database could save the surveillance frame when a person is not wearing the mask and store the evidence for future reference.

CONCLUSIONS

In this paper, an autonomic face mask detection system applying deep learning was proposed for controlling access to critical areas. The face mask detector showed an average accuracy of 96% when detecting whether the user requesting access is wearing a mask, which can be considered good performance given that the model was created using a CNN with the MobileNetV2 architecture for low computational power devices. The confusion matrix shows that the model correctly classified 91.95% of the true positives (people wearing masks) and 99.48% of the true negatives (people not wearing a mask), with 8.05% false negatives but only 0.52% false positives, which can be interpreted as good, since the system will make few mistakes when granting access to people not wearing a mask.

The integration and connection of all the devices is made possible by the application of IoT. User access is requested from the mobile device through image tracking (with the Vuforia app); this request reaches a Mosquitto MQTT server, which is also responsible for forwarding it to the embedded device (Raspberry Pi 4) in charge of granting access using the camera and the face detection model. This occurs within 2 to 2.5 seconds, a time that can be reduced if a device with higher graphics processing power is used.

The use of face masks is essential in times of pandemic, and measures must be taken to ensure that people who leave their homes always wear one when entering public places or places with a lot of contact with other people, which are high-risk conditions for COVID-19 infection. This project shows how technologies such as the IoT, artificial intelligence, and augmented reality can be integrated to help with this problem. With this system, a culture of health can also be fostered in which the use of the mask is mandatory and essential to the “new normal” life.

The access system has the potential to be installed in different areas and adapted according to the needs of the establishment. The results shown in this work revealed an efficient system to control and collect information remotely, without the need for face-to-face monitoring.

A face mask detection system using artificial intelligence and powered by IoT technologies, like the one shown in this paper, has a wide application potential. Everything seems to indicate that the use of face masks will be a measure that should be adopted in different work centers and crowded places. The experience with COVID-19 should be used for the next health contingencies that could potentially occur in the following years.

REFERENCES

[1] Stilianakis NI, Drossinos Y. Dynamics of infectious disease transmission by inhalable respiratory droplets. J R Soc Interface [Internet]. 2010;7(50):1355-1366. Available from: https://doi.org/10.1098/rsif.2010.0026

[2] Çelik I, Saatçi E, Eyüboğlu FO. Emerging and reemerging respiratory viral infections up to Covid-19. Turk J Med Sci [Internet]. 2020;50(SI-1):557-562. Available from: https://doi.org/10.3906/sag-2004-126

[3] Lipsitch M, Cohen T, Cooper B, Robins JM, et al. Transmission Dynamics and Control of Severe Acute Respiratory Syndrome. Science [Internet]. 2003;300(5627):1966-70. Available from: https://doi.org/10.1126/science.1086616

[4] Blackburn RM, Frampton D, Smith CM, Fragaszy EB, et al. Nosocomial transmission of influenza: A retrospective cross-sectional study using next generation sequencing at a hospital in England (2012-2014). Influenza Other Respir Viruses [Internet]. 2019;13(6):556-563. Available from: https://doi.org/10.1111/irv.12679

[5] Mendelson L. Facing Your Face Mask Duties - A List of Statewide Orders. Insight [Internet]; 2021. Available from: https://www.littler.com/publication-press/publication/facing-your-face-mask-duties-liststatewide-orders

[6] World Health Organization. Advice on the use of masks in the context of COVID-19: interim guidance-2. World Health Organization [Internet]; 2020. Available from: https://apps.who.int/iris/handle/10665/332293

[7] Fadinger H, Schymik J. The Costs and Benefits of Home Office during the Covid-19 Pandemic - Evidence from Infections and an Input-Output Model for Germany. COVID Economics [Internet]. 2020;9(1):107-134. Available from: https://doi.org/10.3886/E124902V2

[8] Otoom M, Otoum N, Alzubaidi MA, Etoom Y, et al. An IoT-based framework for early identification and monitoring of COVID-19 cases. Biomed Signal Process Control [Internet]. 2020;62:102149. Available from: https://doi.org/10.1016/j.bspc.2020.102149

[9] Sahlol AT, Yousri D, Ewees AA, Al-Qaness MAA, et al. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci Rep [Internet]. 2020;10(1):15364. Available from: https://doi.org/10.1038/s41598-020-71294-2

[10] Khaleghi A, Moin MS. Improved anomaly detection in surveillance videos based on a deep learning method. 2018 8th Conference of AI & Robotics and 10th RoboCup Iranopen International Symposium (IRANOPEN) [Internet]. Qazvin: IEEE; 2018:73-81. Available from: http://doi.org/10.1109/RIOS.2018.8406634

[11] Lee EA, Seshia SA. Introduction to Embedded Systems, A Cyber-Physical Systems Approach [Internet]. Version 1.8. Berkeley: MIT Press; 2011. Available from: https://ptolemy.berkeley.edu/books/leeseshia/releases/LeeSeshia_DigitalV1_08.pdf

[12] Lee EA. Cyber Physical Systems: Design Challenges. 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC) [Internet]. Orlando: IEEE; 2008:363-369. Available from: http://doi.org/10.1109/ISORC.2008.25

[13] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature [Internet]. 2015;521(7553):436-444. Available from: https://doi.org/10.1038/nature14539

[14] Jin KH, McCann MT, Froustey E, Unser M. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Trans Image Process [Internet]. 2017;26(9):4509-4522. Available from: http://doi.org/10.1109/TIP.2017.2713099

[15] Li H, Lin Z, Shen X, Brandt J, et al. A convolutional neural network cascade for face detection. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) [Internet]. Boston: IEEE; 2015:5325-5334. Available from: http://doi.org/10.1109/CVPR.2015.7299170

[16] Zhu W, Ma Y, Zhou Y, Benton M, et al. Deep Learning Based Soft Sensor and Its Application on a Pyrolysis Reactor for Compositions Predictions of Gas Phase Components. Comput Aided Chem Eng [Internet]. 2018;44(1):2245-2250. Available from: http://doi.org/10.1016/b978-0-444-64241-7.50369-4

[17] Chan T, Jia K, Gao S, Lu J, et al. PCANet: A Simple Deep Learning Baseline for Image Classification? IEEE Trans Image Process [Internet]. 2015;24(12):5017-5032. Available from: http://doi.org/10.1109/TIP.2015.2475625

[18] Shorten C, Khoshgoftaar T. A survey on Image Data Augmentation for Deep Learning. J Big Data [Internet]. 2019;6:60. Available from: https://doi.org/10.1186/s40537-019-0197-0

[19] Escamilla-Ambrosio PJ, Rodríguez-Mota A, Aguirre-Anaya E, Acosta-Bermejo R, et al. Distributing Computing in the Internet of Things: Cloud, Fog and Edge Computing Overview. In: Maldonado Y, Trujillo L, Schütze O, Riccardi A, et al (eds). Studies in Computational Intelligence [Internet]. Vol 731. Cham: Springer; 2018. Available from: http://doi.org/10.1007/978-3-319-64063-1_4

[20] Al-Bahri M, Yankovsky A, Borodin A, Kirichek R. Testbed for Identify IoT-Devices Based on Digital Object Architecture. In: Galinina O, Andreev S, Balandin S, Koucheryavy Y (eds). Internet of Things, Smart Spaces, and Next Generation Networks and Systems. NEW2AN 2018, ruSMART 2018 [Internet]. Vol. 11118. Cham: Springer; 2018. Available from: https://doi.org/10.1007/978-3-030-01168-0_12

[21] Yassein MB, Shatnawi MQ, Aljwarneh S, Al-Hatmi R. Internet of Things: Survey and open issues of MQTT protocol. 2017 International Conference on Engineering & MIS (ICEMIS) [Internet]. Monastir: IEEE; 2017:1-6. Available from: https://doi.org/10.1109/ICEMIS.2017.8273112

[22] Pacheco J, Tunc C, Satam P, Hariri S. Secure and Resilient Cloud Services for Enhanced Living Environments. IEEE Cloud Comput [Internet]. 2016;3(6):44-52. Available from: https://doi.org/10.1109/MCC.2016.129

[23] Gurav O. Face Mask Detection Dataset. Kaggle [Internet]; 2021. Available from: https://www.kaggle.com/omkargurav/face-maskdataset

[24] Getting Started with Vuforia Engine in Unity | VuforiaLibrary. PTC Inc [Internet]. [cited 2021 May 24]. Available from: https://library.vuforia.com/articles/Training/getting-started-with-vuforia-in-unity.html

[25] Amin D, Govilkar S. Comparative Study of Augmented Reality SDK’s. Int J Comput Sci Appl [Internet]. 2015;5(1):11-26. Available from: https://doi.org/10.5121/ijcsa.2015.5102

Received: March 29, 2021; Accepted: July 21, 2021

Corresponding author: Victor Hugo Benitez Baltazar. Institution: Universidad de Sonora. Address: Blvd. Luis Encinas y Ave. Rosales S/N, Col. Centro, C.P. 83000, Hermosillo, Sonora, México. E-mail: victor.benitez@unison.mx

AUTHOR CONTRIBUTIONS

V. H. B. oversaw and administrated the project and obtained funding. J. P. oversaw and administrated the project. R. M. R. and C. N. carried out the data curation, the validation, and the study of the state of the art, and prepared the original draft. R. M. R. was responsible for the software implementation and data visualization. All authors contributed to the methodology implementation and design, participated equally in the conception and design of the project, and carried out the analysis, writing, and review of the draft.

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License