Computación y Sistemas

Online ISSN 2007-9737 · Print ISSN 1405-5546

Comp. y Sist. vol. 27 no. 3, Ciudad de México, Jul./Sep. 2023; Epub Nov 17, 2023

https://doi.org/10.13053/cys-27-3-4301 


Carbon/Nitrogen Ratio Estimation for Urban Organic Waste Using Convolutional Neural Networks

Andrea de Anda-Trasviña 1

Alejandra Nieto-Garibay 1

Fernando D. Von Borstel 1

Enrique Troyo-Diéguez 1

José Luis García-Hernández 2

Joaquín Gutiérrez 1, *

1 Centro de Investigaciones Biológicas del Noroeste, La Paz, Mexico. adeanda@pg.cibnor.mx, anieto04@cibnor.mx, fborstel@cibnor.mx, etroyo04@cibnor.mx.

2 Universidad Juárez del Estado de Durango, Gómez Palacio, Mexico. luis_garher@hotmail.com.


Abstract:

In this paper, the carbon/nitrogen (C/N) ratio of urban organic waste (UOW) was estimated by classifying the waste according to qualitative (color and maturity) and quantitative (weight) characteristics via convolutional neural networks (CNN) and image processing. Reusing UOW is a suitable waste-management practice that prevents its disposal in landfills and reduces its effects on the environment and human health. Ambient conditions change UOW characteristics over time, and knowing these changes, mainly in carbon and nitrogen content, is essential for appropriate reuse. A categorization associated with the decomposition stage of the UOW was proposed, which defines the corresponding UOW classes. Three convolutional neural network models were trained with UOW images: two pre-trained CNNs (MobileNet and VGG16) were adapted through transfer learning, and one proposed model (UOWNet) was trained from scratch. The UOWNet model presented good agreement in the classification task. The results show that this preprocess is a practical tool for assessing the C/N ratio of UOW from its qualitative and quantitative features through image analysis. It is a preliminary framework aimed at supporting household organic waste recycling and community sustainability.

Keywords: Fruit waste; Carbon/Nitrogen ratio; composting; convolutional neural network; image processing

1 Introduction

An estimated 1,957 million tons of fruits and vegetables were produced worldwide in 2018, harvesting mainly bananas, citrus fruits, apples, melons, grapes, tomatoes, various alliums, brassica, and cucumbers [1]. The inedible fraction of fruits and vegetables, known as urban organic waste (UOW) [2], is usually dumped in landfills, which generates undesirable impacts on the environment and the health of the population [3].

An alternative is to reuse UOWs through composting, the aerobic decomposition of organic matter under appropriate conditions [4].

Decomposition and humification (oxidative biological transformation) of organic matter result from microbial activity through complex metabolic processes [5]. The final product of composting is compost: a nutritious, homogeneous, stable, and mature material. Compost provides pathogen-free nutrients that benefit the soil and plants while minimizing pollution [6].

Compost and other alternative technologies can help address the global fertilizer crisis [7], compensating for nutrient deficiencies in the soil to increase crop yields. The composting process involves several parameters, including humidity, aeration, pH, temperature, and the carbon-to-nitrogen (C/N) ratio. These depend on the compostable materials; for instance, the proportions of organic waste determine the C/N ratio [7]. The initial C/N ratio is considered one of the key factors affecting the composting process and compost quality [8]. Carbon is the energy source for microorganisms, nitrogen contributes to organic matter degradation, and both serve protein synthesis and microbial growth [9]. Nitrogen and carbon losses occur because of the metabolic activities of microorganisms during the composting process [10, 11].

Therefore, the C/N ratio is typically set within 25-40 at the start of composting and falls to around 10-20 by the end [6]. Green and fresh organic materials contain more nitrogen, whereas brown and dry materials contain more carbon; in practical terms, the color of compostable matter is a good indicator of its carbon and nitrogen content [9].

Although composting is a well-known process for recycling organic waste, the practical utilization of UOWs still poses various challenges [12]. Besides the heterogeneity of UOWs, their qualitative characteristics, such as the stage of decomposition, whose color changes over time affect their quantitative characteristics (pH, humidity, carbon, and nitrogen content), are scarcely reported. The C/N ratio is commonly evaluated with laboratory methods that require reagents, specialized equipment, and other expensive resources that may be unavailable.

Practical estimation of the C/N ratio of UOWs is one of the keys to fulfilling local demands for better management of residues. In addition, it helps compost producers obtain products with consistent properties, regardless of the organic waste used. In this context, the present work estimates the C/N ratio based on qualitative (color and maturity stage) and quantitative (weight) characteristics of the UOWs through the classification of images using convolutional neural networks (CNN).

This proposed preprocess is a preliminary systematic framework to support compost production at the household and small-community levels, recycling organic waste while conserving resources as a sustainability practice. For this purpose, a dataset of individual UOW images was generated as ground truth, associating each image with the weight and decomposition stage of the UOW.

The C/N ratio of five UOWs in three decomposition stages was determined through a systematic literature search complemented with laboratory analyses. The C/N ratio data were arranged in a lookup table. For automated UOW classification, three CNN architecture models were trained.

The experiments consisted of training the models to classify the UOWs into their three stages of decomposition, achieving appropriate accuracy. In addition, linear regression models were generated to predict the weight of a UOW from the number of pixels in its image. The class and weight of each UOW are then looked up in the UOW C/N table to estimate its C/N ratio.

2 Classification Algorithms

Digital image processing allows the interpretation of images and autonomous machine perception, including image storage, transmission, and representation [13]. Object detection and classification support a wide range of applications, such as security, robot vision, and consumer electronics [14].

In this context, deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data, leading to advancements in detection and classification tasks [15]. Deep learning algorithms composed of multiple layers of processing detect features at different abstraction levels from data [16].

CNNs are deep learning algorithms based on artificial neural networks for classification and pattern recognition tasks, identifying the geometric similarity of the object shapes [17].

Automatic food analysis is an application field of these algorithms, with systems developed to locate and recognize various foods and to estimate their quantity for tracking user diets, maintaining a healthy weight, and monitoring food-related health problems such as obesity and diabetes. A dataset of 20 food categories was assessed with four CNNs combined with support vector machines (SVM) for recognizing food categories and states [18]. Deep architectures [19] were used for simultaneous learning of ingredient recognition and food categorization from dish images for recipe retrieval.

Deep learning features and semantic ingredient labels are then applied to recall recipes with little effort. CNNs have also been applied for identification in videos [20]. Objects and ingredients are explored in cooking videos, and the most frequent objects are analyzed through a deep model based on the ResNet CNN as a solution to the stage identification problem [21].

Another deep learning approach, based on the Inception CNN, has been proposed to identify different cooking states from images, analyzing the effect of parameters such as batch size, optimizer, and frozen convolutional layers [22].

The automatic classification of fruit can assist in supermarket pricing, packaging, transportation, harvesting, and mapping on farms. Initial studies used near-infrared imaging devices, gas sensors, and high-performance liquid chromatography to scan the fruit [23, 24]. Recent works apply deep learning techniques for fruit and vegetable classification, using or modifying CNNs previously trained on large datasets.

When the new dataset is small, the suggested technique relies on data augmentation, which consists of applying transformations to the data [25]. In the case of images, data augmentation applies geometric (zoom, flip, rotation, and cropping) or photometric (color fluctuation and edge enhancement) transformations [26].

A previous study created a 15-class fruit image dataset with a data augmentation technique and proposed a five-layer CNN [27]. To improve the performance of the proposed model, early stopping and variable hyperparameters were used. Some works have used eight and thirteen layers and kernels of 3×3, 5×5, and 7×7 [23, 28].

Several works have applied CNN for the classification of waste using images. For instance, a multi-layer deep learning system classifies waste into five recyclable categories: paper, plastic, metal, glass, and others (fruits, vegetables, plants, and kitchen waste) in urban public areas [29]. An automatic system using the CNN algorithm separates waste into biodegradable and non-biodegradable [30]. A CNN mobile application classifies waste into three categories: trash, recyclable, and compostable [31]. However, none of these works classifies fruit or vegetable wastes.

This work proposes the classification of UOWs using CNNs, focusing on the inedible fraction of fruits and vegetables (peels and cores). The above facilitates estimating the C/N ratio of different UOWs and establishing the appropriate proportions before starting the composting process. Another contribution from this project is a database of UOW images since there is no such database available.

This work contributes to the United Nations Sustainable Development Goals for Latin America by promoting, through household composting, the reuse of UOW, which makes up approximately 50% of the waste generated in cities [3].

Hence, it aims to avoid the impact of the final disposal of UOWs in landfills, which promotes the generation of toxic and greenhouse gases, water and soil contamination, and landscape degradation around nearby populations.

Household composting releases five times less ammonia, methane, and nitrous oxide than industrial composting, which consumes up to 53 times more resources (transport, energy, water, and infrastructure) and produces volatile organic compound emissions [32].

Some commercial devices compost household waste by mixing it while maintaining humidity and temperature. However, the resulting product is insufficiently stable to be applied immediately in gardening or agricultural activities, and although these devices are compact, they are priced like a large appliance [33].

Besides, applying machine learning techniques provides automatic classification of UOWs before they pass through the composting process, which enhances management of the composting process and the estimation of final product quality.

It would also help avoid the energy consumption of electronic laboratory equipment and the use of reagents. Furthermore, the CNN model designed from scratch is intended to be compact, so that it can be embedded in a low-power processing board or a mobile device such as a smartphone.

3 Methodology

The system for estimating the C/N ratio of UOWs is composed of two parts: 1) an index database of the UOWs selected for this study, and 2) an automatic C/N ratio estimator for UOWs (Fig. 1). An index is a numerical correlation of a particular UOW in a digital image with its qualitative (color and stage of decomposition) and quantitative (weight and C/N ratio) characteristics.

Fig. 1 Procedure for estimation of C/N ratio of UOWs 

3.1 UOW Index Database

The creation of the UOWs index database involved a series of activities that are described in the following sections.

3.1.1 UOWs Selection and Collection

A detailed query identified the fruits and vegetables most consumed in Mexico according to statistics [34], in order to select and collect their inedible fraction as UOWs [7, 35–37].

3.1.2 UOW Images Capture

Digital images of the collected fruit and vegetable UOWs were captured under controlled conditions using a systematic capturing algorithm developed in the Python programming language [38] with the OpenCV (Open Source Computer Vision) library [39]. The algorithm tagged and stored the images automatically; they were acquired with a Logitech C920 webcam mounted overhead at 0.30 m under a white light source from a 5 W LED lamp.

The UOWs were laid in an extended pose on a flat blue background to create contrast with their natural colors, minimizing occlusion and allowing the UOWs to be separated from the background with corresponding filters, since blue is one of the three channels of the RGB color space. Image dimensions are 960 × 720 pixels in JPEG (Joint Photographic Experts Group) format.
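The capture script itself is not published; a minimal sketch of this step, assuming a webcam at device index 0, an illustrative blue-dominance threshold, and hypothetical file names, could look like the following:

```python
import cv2
import numpy as np

# Grab one 960x720 frame from the webcam (device index 0 is an assumption).
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 960)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not capture a frame")

# Separate the UOW from the blue background: pixels where the blue
# channel clearly dominates red and green are treated as background.
b = frame[:, :, 0].astype(np.int16)
g = frame[:, :, 1].astype(np.int16)
r = frame[:, :, 2].astype(np.int16)
background = (b > g + 30) & (b > r + 30)      # illustrative margin
mask = np.where(background, 0, 255).astype(np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Tag and store the image; this naming convention is hypothetical.
cv2.imwrite("banana_initial_0001.jpg", frame)
cv2.imwrite("banana_initial_0001_mask.png", mask)
```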

3.1.3 Weight Measurement

After capturing the image, the weight of each UOW, expressed in grams (g), was measured with a digital balance (Queen Sense; 0.01 g resolution, 0.01-500 g range) and recorded in the index database.

3.1.4 Class Assignation

In this study, a class is the combination of a UOW type and a stage of decomposition. The decomposition stage of a UOW was categorized as initial, middle, or advanced, mapping the UOW value to a class specified by predefined thresholds. Additionally, a color scale is associated with each UOW class based on the Von Loesecke [40] maturity and color scale for postharvest bananas.

3.1.5 Decomposition Stage Determination

The UOWs are separated from the blue background of each captured image to assess their stage of decomposition based on the oxidation process, which shows mainly characteristic dark brown and black colors, although gray and white may appear in organic wastes due to fungal colonization [41]. The UOW decomposition stage (UOWds) was calculated by:

$UOW_{ds} = 100 \cdot dp / tp,$ (1)

where tp is the total number of pixels of each separated UOW and dp is the number of dark pixels within it, determined on the L channel of the CIELab color space, which represents luminosity on a scale of 0 to 100 (0 corresponds to black and 100 to white) [42]. The procedure was implemented using Matlab functions.
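The authors implemented this step in Matlab; an equivalent sketch in Python with OpenCV, assuming a binary mask that isolates the UOW and an illustrative darkness cutoff, could be:

```python
import cv2
import numpy as np

def uow_ds(image_bgr: np.ndarray, mask: np.ndarray, dark_cutoff: int = 40) -> float:
    """Percentage of dark pixels within the UOW region (Eq. 1).

    OpenCV scales the 8-bit CIELab L channel to 0-255, so a cutoff of 40
    corresponds roughly to L ~ 16 on the 0-100 scale; the exact threshold
    used by the authors is not reported, and this value is an assumption.
    """
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab)
    luminosity = lab[:, :, 0]
    uow_pixels = luminosity[mask > 0]          # tp: pixels belonging to the UOW
    tp = uow_pixels.size
    dp = int(np.count_nonzero(uow_pixels < dark_cutoff))  # dark pixels
    return 100.0 * dp / tp if tp else 0.0
```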

3.1.6 C/N Ratio Determination

A systematic search of the C/N ratio content of the selected UOWs followed a methodology based on criteria for the review, selection, and evaluation of scientific material published on a study subject [43]. The systematic search was complemented with data from laboratory analyses estimating the carbon and nitrogen content of the UOWs.

The laboratory analyses were performed at Centro de Investigaciones Biológicas del Noroeste, S.C. (CIBNOR), in La Paz, B.C.S., Mexico, estimating nitrogen by the Dumas method on Leco FP-528 equipment.

The carbon content (%C) was estimated by Eqs. 2-3. First, the UOW samples were weighed, dried, and homogenized. The ash content (%Ash) was estimated by weight difference after incinerating the dried UOW at 600 °C for 5 h (A.O.A.C. 2002 method). The volatile solids content (%VS) and carbon content were then estimated as:

$\%VS = 100 - \%Ash,$ (2)

$\%C = \%VS / 1.8.$ (3)
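As a worked example of Eqs. 2-3 (the ash value below is illustrative, not a measurement from the study):

```python
def carbon_content(ash_percent: float) -> float:
    """Estimate %C from %Ash using Eqs. 2-3."""
    vs = 100.0 - ash_percent   # Eq. 2: volatile solids (%VS)
    return vs / 1.8            # Eq. 3: carbon content (%C)

# A dried sample with 8 %Ash:
print(carbon_content(8.0))     # 92 / 1.8 = ~51.1 %C
```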

A one-way ANOVA was applied to the results using the R programming language [44]. The p-value was calculated and compared with the α-level (0.05) to establish whether the differences between groups (decomposition stages) are statistically significant.
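The authors ran this analysis in R; an equivalent one-way ANOVA sketch in Python with SciPy, using made-up replicate values rather than the study's measurements, is:

```python
from scipy.stats import f_oneway

# Illustrative C/N replicates per decomposition stage (not the study's data).
initial  = [56.1, 57.0, 56.2]
middle   = [39.5, 40.1, 39.8]
advanced = [43.2, 43.6, 43.5]

f_stat, p_value = f_oneway(initial, middle, advanced)
print(p_value < 0.05)  # True -> stage differences are statistically significant
```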

3.2 Automatic C/N Ratio Estimator for UOWs

The automatic estimation of the UOW C/N ratio from a digital image consists of three steps: 1) UOW image classification, 2) UOW weight estimation, and 3) C/N ratio estimation for UOWs.

3.2.1 UOW Images Classification

The classification of a UOW by its stage of decomposition was based on a CNN approach, adapting existing CNN algorithms and proposing a new one. The set of UOW images was labeled into the classes corresponding to the established UOW decay stages and distributed into three image subsets for CNN training, validation, and testing [45].

Then, data augmentation techniques were applied to the dataset, performing random geometric transformations (flipping, rotation, translation, and zoom) on the images to obtain new UOW poses without generating data outside of reality [25]. The transfer learning strategy was applied to the pre-trained CNNs [17], and the CNN learning process was executed in the Google Colaboratory environment [46], as sketched below.
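The paper's training code is not published; a plausible Keras sketch of the augmentation and transfer-learning setup with MobileNet, assuming 15 output classes and ImageNet weights, is:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Random geometric augmentation: flip, rotation, translation, and zoom.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
    layers.RandomTranslation(0.1, 0.1),
    layers.RandomZoom(0.1),
])

# Transfer learning: frozen MobileNet feature extractor plus a new head.
base = tf.keras.applications.MobileNet(input_shape=(224, 224, 3),
                                       include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained convolutional weights fixed

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNet expects [-1, 1]
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(15, activation="softmax"),      # 5 UOW types x 3 stages
])
```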

3.2.2 UOWs Weight Estimation

The weight data in the database serve to estimate the weight of the UOW in a new image through linear regression. The linear regression, fitted in the R programming language [44], explains the relation between the number of UOW pixels in the image and the UOW weight in grams.

3.2.3 C/N Ratio Estimation for UOWs

Each image was associated with a value of the C/N ratio according to its class, understanding class as the type of UOW and stage of decomposition.

4 Experimental Results

4.1 UOW Index Database

The UOWs index database includes the quantitative (weight, C/N ratio) and qualitative (color and maturity stage) characteristics of selected UOWs associated with an image.

4.1.1 UOWs Selection and Collection

According to the selection analysis, the five fruits and vegetables with the highest consumption in Mexico were selected: banana (Musa paradisiaca), apple (Malus domestica), orange (Citrus sinensis), lemon (C. aurantifolia), and potato (Solanum tuberosum). Table 1 shows their annual consumption per capita in Mexico; their inedible fractions (waste) are mainly peels.

Table 1 Annual consumption and the inedible fraction of fruit and vegetables selected for the study 

UOW      Annual consumption per capita (kg) a   Inedible fraction b   Fraction (%) b
Apple    8                                      Peel, core            12
Banana   14.4                                   Peel                  35
Orange   37.2                                   Peel, seeds           30
Lemon    15.1                                   Peel, seeds           34
Potato   15.1                                   Peel                  16

a [34]; b [7, 35–37].

4.1.2 UOW Images Capture

The UOW dataset comprises 7,500 RGB images of the selected fruit and vegetable categories: 1,500 core and peel images each for apple, banana, lemon, orange, and potato, with the image sets balanced equally across decomposition stages. Fig. 2 shows the system for capturing UOW images, and Fig. 3 depicts an image set with UOWs in an extended pose, particularly banana peels, on the blue background.

Fig. 2 UOW image capture system 

Fig. 3 UOW samples: a) apple core, b) banana peel, c) lemon peel, d) orange peel, and e) potato peel 

4.1.3 Weight Measurement

The UOW weight was associated with each captured image (Fig. 4).

Fig. 4 UOW weight measurement 

4.1.4 Class Assignation

Class assignment resulted in 15 classes: each type of UOW (apple, banana, lemon, orange, and potato) associated with a decomposition stage (initial, middle, and advanced). An initial-stage UOW was collected approximately one day earlier, a middle-stage UOW at most seven days earlier, and an advanced-stage UOW more than seven days earlier.

4.1.5 Decomposition Stage Determination

The L channel (luminosity intensity) of the CIELab color space was chosen to estimate the decomposition stage of a UOW in an image by applying Eq. (1). Decomposition-stage thresholds were established on the L channel after counting the dark pixels (dp): UOWds < 30 for the initial stage, 30 < UOWds < 50 for the middle stage, and UOWds > 50 for the advanced stage. Fig. 5 shows the original image converted to the L channel.
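These thresholds translate directly into a stage-assignment rule; a minimal sketch follows (the boundary convention at exactly 30 and 50 is an assumption, since the paper leaves it unspecified):

```python
def decomposition_stage(uow_ds: float) -> str:
    """Map the dark-pixel percentage of Eq. (1) to a decomposition stage."""
    if uow_ds < 30:
        return "initial"
    if uow_ds < 50:
        return "middle"
    return "advanced"

print(decomposition_stage(42.0))  # -> "middle"
```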

Fig. 5 Banana peel stage of decomposition based on L channel of the Lab color space: a) initial, b) middle and c) advanced 

Finally, the image histogram was generated on the L channel, and the thresholds were applied to determine the decomposition stages. The decomposition-stage algorithm categorized the UOW image dataset into the 15 classes with accuracies of 52-55%.

Thus, this process was visually supervised to corroborate the estimated decomposition stages of the UOWs shown in the images and to obtain an accurately labeled dataset, using a color scale related to each UOW category in the three stages of decomposition: initial, middle, and advanced (Table 2).

Table 2 Color and stage of decomposition by UOW class 

UOW      Initial                Middle                Advanced
Apple    Intense red            Opaque red            Brown
Banana   Intense yellow         Opaque yellow         Brown
Orange   Intense orange         Opaque orange         Brown
Lemon    Intense green/yellow   Opaque green/yellow   Brown
Potato   Light brown            Brown                 Dark brown

4.1.6 C/N Ratio Determination

The systematic search for the C/N ratio of UOWs retrieved 2,988 articles from the proposed search engines: 971 full documents and 2,017 abstracts. After eliminating duplicates and applying the exclusion criteria, 20 papers remained that report the C/N ratio of the selected UOWs without indicating the stage of decomposition. This search leads to two observations: first, the lack of C/N data for UOWs is evident; second, UOW decomposition stages go unreported.

The one-way analysis of variance of the results of laboratory analyses suggests that there are significant differences in the C/N ratio of the UOWs in all stages, except for the case of orange in the middle and advanced stages. Table 3 shows the C/N ratio obtained by laboratory analysis for the three decomposition stages (initial, middle, and advanced) and the result of the systematic search.

Table 3 UOWs C/N Ratio. Laboratory analysis and systematic search 

UOW      Laboratory analysis (Initial / Middle / Advanced)   Systematic search*
Apple    106.06 / 78.72 / 88.65                              48
Banana   56.45 / 39.79 / 43.43                               34.34
Lemon    40.46 / 35.79 / 37.21                               28.38
Orange   54.50 / 66.28 / 66.30                               48.18
Potato   21.56 / 19.30 / 17.33                               14.94

*[47,48,57–62,49–56].

4.2 Automatic Estimator of the UOWs C/N Ratio

After determining the decomposition stage of the UOWs in the images, a refined image collection was assembled so that the 15 classes were represented in a balanced way, with 500 images per class.

Then, each class was divided into 70% (350 images) for training, 20% (100 images) for validation, and 10% (50 images) for testing using the split-folders 0.4.3 library [63]. The UOW images were split randomly into train, validation, and test folders for each class according to the programmed ratios.
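With the split-folders library this step reduces to a single call; the folder names here are illustrative, and the seed is an assumption added for reproducibility:

```python
import splitfolders  # pip install split-folders==0.4.3

# Split each of the 15 class folders into train/val/test at 70/20/10.
splitfolders.ratio("uow_dataset", output="uow_split",
                   seed=42, ratio=(0.7, 0.2, 0.1))
```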

4.2.1 UOW Images Classification

Three CNN architecture models were selected to train and compare for UOW classification: 1) UOWNet, a CNN proposed in this study based on LeNet-5 network blocks, with five convolutional layers, one fully connected layer, and small kernels (3×3); 2) MobileNet, developed by Google [64], which applies 1×1 and 3×3 kernels in its layers; and 3) VGG16, proposed by Oxford University [65], which consists of a stacked set of convolutional layers with small kernels (3×3) [66].
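The exact UOWNet filter counts are not reported; a plausible Keras sketch consistent with the description (five 3×3 convolutional layers in LeNet-style blocks plus one fully connected output layer) is:

```python
from tensorflow.keras import layers, models

def build_uownet(num_classes: int = 15) -> models.Model:
    """Sketch of a UOWNet-like model. Filter counts are assumptions;
    the paper only specifies five 3x3 conv layers and one dense layer."""
    return models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),  # fully connected layer
    ])
```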

The lack of a large amount of UOW data and the high computational cost of training CNNs from scratch encouraged the adaptation of the pre-trained MobileNet and VGG16 networks to the classification task through transfer learning.

The same hyperparameters were set for each CNN, including the loss function (categorical cross-entropy) and the optimization function (Adam). A learning rate of 0.0001 over 100 epochs and a dropout of 0.2 were set, allowing the model to randomly remove some nodes during training and reduce overfitting [67]. Image dimensions were 224 × 224 × 3 (width, height, channels) with a batch size of 32, as in the sketch below.
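With the hyperparameters stated above, the training configuration would look as follows; `train_ds` and `val_ds` are assumed to be `tf.data` pipelines yielding 224×224×3 image batches (batch size 32) with one-hot labels:

```python
import tensorflow as tf

model = build_uownet()  # or the MobileNet/VGG16 transfer models above
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
history = model.fit(train_ds, validation_data=val_ds, epochs=100)
```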

Table 4 shows the resulting performance of the models for UOW classification. For instance, the models achieved an average accuracy of 85.62% for the apple core. Fig. 6 shows the loss and accuracy curves from the training and validation phases, and Fig. 7 shows the confusion matrices obtained in the testing phase for the three CNN models.

Table 4 Accuracy for the pre-trained CNNs and UOWNet

UOW             MobileNet   VGG16    UOWNet   Average
Apple           85.24%      82.10%   89.52%   85.62%
Banana          90.67%      82.48%   86.76%   86.64%
Lemon           68.57%      59.52%   73.23%   67.11%
Orange          80.86%      72.95%   82.48%   78.76%
Potato          82.66%      73.04%   86.47%   80.72%
Model average   81.60%      74.02%   83.69%

Fig. 6 Accuracy curves: MobileNet V2, VGG16, and UOWNet 

Fig. 7 Confusion matrixes obtained in the testing phase for the CNN models: Advanced, Middle, and Initial 

Normalized confusion matrices (%) were generated for the test data, where the MobileNet and UOWNet models present the better rankings.

Banana peel reached an average accuracy of 86.64%, and its confusion matrices show better performance for the MobileNet and UOWNet models.

Lemon peel presented the lowest average accuracy (67.11%), with low percentages in the test classification. The models reach average accuracies of 78.76% and 80.72% for orange and potato peels, respectively.

These results also show that the MobileNet and UOWNet models achieve higher accuracy than the VGG16 model. This could be explained by VGG16 being too complex for the small dataset of this project, given its large number of trainable parameters and convolutional layers, which makes it less efficient [67].

In the case of apple, lemon, and orange, this model tends to overfit. In contrast, the MobileNet model presents underfitting because it was developed for mobile-device vision applications, which reduces its accuracy [64]. Fig. 8 depicts the CNN model performance considering the Kappa statistic (κ), ranging from 0 to 1.

Fig. 8 Kappa statistics for CNN models. UOWNet presented a good agreement for all classes 

This metric measures the reliability between the observed agreement $p_o$ on the ground-truth labels (the values of each UOW) and the agreement $p_e$ expected by chance alone for the UOW CNN classifiers [68], expressed by:

$\kappa = \dfrac{p_o - p_e}{1 - p_e}.$ (4)

The Kappa values are associated with agreement labels [69] between the computed classification (model results) and the classification expected by chance for the UOW classifiers: very bad (<0.00), bad (0.00-0.20), regular (0.21-0.40), moderate (0.41-0.60), good (0.61-0.80), and very good (0.81-1.00).
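The same metric is available in scikit-learn; a sketch with illustrative placeholder labels (not the study's predictions) that also applies the agreement bands above:

```python
from sklearn.metrics import cohen_kappa_score

# y_true: ground-truth classes of test images; y_pred: CNN predictions.
y_true = ["initial", "middle", "advanced", "middle", "initial"]
y_pred = ["initial", "middle", "middle",   "middle", "initial"]

kappa = cohen_kappa_score(y_true, y_pred)  # Eq. 4
bands = [(0.81, "very good"), (0.61, "good"), (0.41, "moderate"),
         (0.21, "regular"), (0.0, "bad")]
label = next((name for cutoff, name in bands if kappa >= cutoff), "very bad")
print(kappa, label)
```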

According to the Kappa statistic, the UOWNet models presented good agreement, whereas the MobileNet and VGG16 models reached only moderate agreement for the lemon class.

Increasing the size of the UOW dataset and adjusting the hyperparameters could improve the classification accuracy of any of the three CNNs applied [31].

4.2.2 UOWs Weight Estimation

The weight estimation for each UOW class was obtained using the linear regression method. Five models were generated, one per class, to explain the linear relationship between two variables: the number of pixels and the associated weight (in grams). Equation (5) describes this relation:

$\hat{y} = \alpha + \beta x,$ (5)

where $\hat{y}$ is the expected weight and x is the number of pixels. Fig. 9 shows the resulting scatter plot for each UOW class, together with its linear equation and coefficient of determination (R²).

Fig. 9 Linear model for weight and pixels 
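The authors fitted these models in R; an equivalent sketch in Python with NumPy, on illustrative placeholder data rather than the study's measurements, is:

```python
import numpy as np

# pixels: UOW pixel counts per image; weights: measured grams (made-up values).
pixels  = np.array([12000, 18500, 25100, 31000, 40200])
weights = np.array([22.4, 35.1, 48.0, 60.2, 77.9])

beta, alpha = np.polyfit(pixels, weights, 1)   # Eq. 5: y_hat = alpha + beta*x
y_hat = alpha + beta * pixels
ss_res = np.sum((weights - y_hat) ** 2)
ss_tot = np.sum((weights - weights.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                       # coefficient of determination
print(f"weight ~ {alpha:.2f} + {beta:.5f} * pixels, R2 = {r2:.2f}")
```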

The weight estimation results revealed that the models best explaining the data variation were the apple and potato models (R² ≥ 90%). This is because the apple core and potato peel have a consistent shape and show little variation in pixel count across the three stages of decomposition.

In some images, the weight estimation is limited by the pose of the UOW, which yields a lower pixel count. Such is the case for banana peel, whose model explains 72% of the data variation (R² = 72%).

Placing the UOWs fully extended when capturing the image would increase the explained variance between weight and pixel count for all UOW types. Another variable, such as the decomposition stage, could also be considered to increase the correlation.

The results for the linear models of the orange and lemon classes (R² = 62% and R² = 63%, respectively) can be explained by the fact that some orange and lemon wastes still contained juice, so their actual weight may be greater than the weight estimated by the linear model.

4.2.3 C/N Ratio Estimation for UOWs

The proposed framework identifies the class of each UOW (type and decomposition stage), looks up its C/N ratio from the laboratory-analysis results (Table 3), and estimates its weight with the corresponding linear model in order to estimate the resulting C/N ratio of a mixture of UOWs.

Fig. 10 depicts an exercise of UOW classification, weight estimation, and C/N ratio lookup. Once the relative weights ($W_i$) and C/N ratios ($CNR_i$) of the individual UOWs to be composted are obtained, Eq. 6 can be used to calculate the ratio of the mix as a whole ($CNR_T$), which should fall in the range 25-40 at the beginning of the composting process:

$CNR_T = (W_1 \cdot CNR_1) + (W_2 \cdot CNR_2) + \cdots$ (6)
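A small helper makes the mixture calculation concrete; the reading of Eq. 6 as a sum weighted by relative weight fractions is reconstructed from the surrounding text:

```python
def mixture_cn_ratio(weights_g, cn_ratios):
    """Weighted C/N ratio of a UOW mix (Eq. 6): weights in grams are
    normalized into relative fractions W_i before weighting each CNR_i."""
    total = sum(weights_g)
    return sum((w / total) * cn for w, cn in zip(weights_g, cn_ratios))

# Example with Table 3 values: 60 g of initial-stage banana peel (C/N 56.45)
# mixed with 140 g of initial-stage potato peel (C/N 21.56):
print(mixture_cn_ratio([60, 140], [56.45, 21.56]))  # ~32.0, inside 25-40
```

Adjusting the proportions of carbon-rich and nitrogen-rich UOWs in this way lets the mix be tuned into the recommended starting range.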

Fig. 10 C/N estimation for UOWs 

5 Conclusion

The proposed CNN-based method for estimating the C/N ratio of UOWs from images is a practical preprocess and an immediate, less expensive alternative to laboratory analyses.

The transfer learning strategy allowed the pre-trained CNNs to be trained on new image categories: apple core and banana, lemon, orange, and potato peels.

The UOWNet model showed significant performance in the testing phase compared to the MobileNet and VGG16 models, suggesting that increasing the number of layers, adapting the hyperparameters, and enlarging the dataset may yield even better UOW classification performance. UOWNet also implies a lower computational cost than the other CNN models.

The paper's contributions include a UOW classification based on the decomposition stage and a database of UOW indices correlating qualitative and quantitative characteristics.

In the future, more experiments will be necessary to improve the image set and to test other filter sizes, improving the overall performance of the proposed model in classifying the decomposition stages of UOWs.

Further work will produce a trained CNN model that classifies UOWs on embedded and mobile devices for fieldwork.

Acknowledgments

The author A. de A.-T. would like to thank the National Council of Science and Technology of Mexico (CONACYT, scholarship 781533) and CIBNOR.

References

1. FAO (2020). Frutas y verduras – esenciales en tu dieta. Año Internacional de las Frutas y Verduras, 2021. Background document [cited 2021 Apr 16]. Available from: https://doi.org/10.4060/cb2395es.

2. Colón, J., Martínez-Blanco, J., Gabarrell, X., Artola, A., Sánchez, A., Rieradevall, J., Font, X. (2010). Environmental assessment of home composting. Resources, Conservation and Recycling, Vol. 54, No. 11, pp. 893–904. DOI: 10.1016/j.resconrec.2010.01.008.

3. Kaza, S., Yao, L., Bhada-Tata, P., Van Woerden, F. (2018). What a Waste 2.0: A Global Snapshot of Solid Waste Management to 2050. World Bank Publications.

4. Román, P., Martínez, M. M., Pantoja, A. (2013). Manual de compostaje del agricultor. Experiencias en América Latina. Organización de las Naciones Unidas para la Alimentación y la Agricultura.

5. Sayara, T., Basheer-Salimia, R., Hawamde, F., Sánchez, A. (2020). Recycling of organic wastes through composting: Process performance and compost application in agriculture. Agronomy, Vol. 10, No. 11, pp. 1838. DOI: 10.3390/agronomy10111838.

6. Bueno-Márquez, P., Díaz-Blanco, M. J., Cabrera-Capitán, F. (2008). Factores que afectan al proceso de compostaje. In: Moreno, J., Moral, R., editors. Compostaje, Madrid, España: Mundi-Prensa, pp. 95–109.

7. Ghinea, C., Leahu, A. (2020). Monitoring of fruit and vegetable waste composting process: Relationship between microorganisms and physico-chemical parameters. Processes, Vol. 8, No. 3, pp. 302. DOI: 10.3390/pr8030302.

8. Kumar, M., Ou, Y. L., Lin, J. G. (2010). Co-composting of green waste and food waste at low C/N ratio. Waste Management, Vol. 30, No. 4, pp. 602–609. DOI: 10.1016/j.wasman.2009.11.023.

9. Centro de Investigaciones Biológicas del Noroeste, S.C. (2021). La composta. Importancia, elaboración y uso agrícola. 1st ed. México, D.F.: Trillas, pp. 88.

10. Sánchez-Monedero, M. A., Roig, A., Paredes, C., Bernal, M. P. (2001). Nitrogen transformation during organic waste composting by the Rutgers system and its effects on pH, EC and maturity of the composting mixtures. Bioresource Technology, Vol. 78, No. 3, pp. 301–308. DOI: 10.1016/S0960-8524(01)00031-1.

11. Zhang, W. M., Yu, C. X., Wang, X. J., Hai, L. (2020). Increased abundance of nitrogen transforming bacteria by higher C/N ratio reduces the total losses of N and C in chicken manure and corn stover mix composting. Bioresource Technology, Vol. 297, pp. 122410. DOI: 10.1016/j.biortech.2019.122410.

12. Li, Z., Lu, H., Ren, L., He, L. (2013). Experimental and modeling approaches for food waste composting: A review. Chemosphere, Vol. 93, No. 7, pp. 1247–1257. DOI: 10.1016/j.chemosphere.2013.06.064.

13. González, R. C., Woods, R. E. (2018). Digital image processing. Pearson, pp. 1168.

14. García-Villanueva, M., Romero-Muñoz, L. (2020). Diseño de una arquitectura de red neuronal convolucional para la clasificación de objetos. Ciencia Nicolaita, No. 81, pp. 46–61. DOI: 10.35830/cn.vi81.517.

15. Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M. (2020). Deep learning for generic object detection: A survey. International Journal of Computer Vision, Vol. 128, No. 2, pp. 261–318. DOI: 10.1007/s11263-019-01247-4.

16. Lecun, Y., Bengio, Y., Hinton, G. (2015). Deep learning. Nature, Vol. 521, No. 7553, pp. 436–444. DOI: 10.1038/nature14539.

17. Aggarwal, C. C. (2018). Neural networks and deep learning. Springer International Publishing, pp. 512. DOI: 10.1007/978-3-319-94463-0.

18. Ciocca, G., Micali, G., Napoletano, P. (2020). State recognition of food images using deep features. IEEE Access, Vol. 8, pp. 32003–32017. DOI: 10.1109/ACCESS.2020.2973704.

19. Chen, J., Ngo, C. W. (2016). Deep-based ingredient recognition for cooking recipe retrieval. Proceedings of the 24th ACM International Conference on Multimedia, pp. 32–41. DOI: 10.1145/2964284.2964315.

20. Silva, D., Manzo-Martínez, A., Gaxiola, F., González-Gurrola, L., Ramírez-Alonso, G. (2022). Analysis of CNN architectures for human action recognition in video. Computación y Sistemas, Vol. 26, No. 2, pp. 623–641. DOI: 10.13053/cys-26-2-4245.

21. Jelodar, A. B., Salekin, M. S., Sun, Y. (2018). Identifying object states in cooking-related images. DOI: 10.48550/arXiv.1805.06956.

22. Salekin, M. S., Babaeian-Jelodar, A., Kushol, R. (2019). Cooking state recognition from images using inception architecture. International Conference on Robotics, Electrical and Signal Processing Techniques, pp. 163–168. DOI: 10.1109/ICREST.2019.8644262.

23. Wang, S. H., Chen, Y. (2018). Fruit category classification via an eight-layer convolutional neural network with parametric rectified linear unit and dropout technique. Multimedia Tools and Applications, Vol. 79, No. 21–22, pp. 15117–15133. DOI: 10.1007/s11042-018-6661-6.

24. Sakib, S., Ashrafi, Z., Siddique, A. B. (2019). Implementation of fruits recognition classifier using convolutional neural network algorithm for observation of accuracies for various hidden layers. DOI: 10.13140/RG.2.2.31636.14723.

25. Torres, J. (2019). Deep learning: Introducción práctica con Keras. WATCH THIS SPACE, pp. 176.

26. Hridayami, P., Darma-Putra, I. K. G., Suard-Wibawa, K. S. (2019). Fish species recognition using VGG16 deep convolutional neural network. Journal of Computing Science and Engineering, Vol. 13, No. 3, pp. 124–130. DOI: 10.5626/JCSE.2019.13.3.124.

27. Hussain, I., He, Q., Chen, Z. (2018). Automatic fruit recognition based on DCNN for commercial source trace system. International Journal on Computational Science & Applications, Vol. 8, No. 2–3, pp. 1–14. DOI: 10.5121/ijcsa.2018.8301.

28. Zhang, Y. D., Dong, Z., Chen, X., Jia, W., Du, S., Muhammad, K., Wang, S. H. (2019). Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation. Multimedia Tools and Applications, Vol. 78, No. 3, pp. 3613–3632. DOI: 10.1007/s11042-017-5243-3.

29. Chu, Y., Huang, C., Xie, X., Tan, B., Kamal, S., Xiong, X. (2018). Multilayer hybrid deep-learning method for waste classification and recycling. Computational Intelligence and Neuroscience, Vol. 2018. DOI: 10.1155/2018/5060857.

30. Desai, Y., Dalvi, A., Jadhav, P., Baphna, A. (2018). Waste segregation using machine learning. International Journal for Research in Applied Science and Engineering Technology, Vol. 6, pp. 537–541.

31. Frost, S., Tor, B., Agrawal, R., Forbes, A. G. (2019). CompostNet: An image classifier for meal waste. IEEE Global Humanitarian Technology Conference, pp. 1–4. DOI: 10.1109/GHTC46095.2019.9033130.

32. Martínez-Blanco, J., Colón, J., Gabarrell, X., Font, X., Sánchez, A., Artola, A., Rieradevall, J. (2010). The use of life cycle assessment for the comparison of biowaste composting at home and full scale. Waste Management, Vol. 30, No. 6, pp. 983–994. DOI: 10.1016/j.wasman.2010.02.023.

33. Kucbel, M., Raclavská, H., Růžičková, J., Švédová, B., Sassmanová, V., Drozdová, J., Raclavský, K., Juchelková, D. (2019). Properties of composts from household food waste produced in automatic composters. Journal of Environmental Management, Vol. 236, pp. 657–666. DOI: 10.1016/j.jenvman.2019.02.018.

34. SIAP (2020). Panorama Agroalimentario 2020. Ciudad de México, pp. 200. https://nube.siap.gob.mx/gobmx_publicaciones_siap/pag/2020/AtlasAgroalimentario2020.

35. Ghinea, C., Apostol, L. C., Prisacaru, A. E., Leahu, A. (2019). Development of a model for food waste composting. Environmental Science and Pollution Research, Vol. 26, pp. 4056–4069. DOI: 10.1007/s11356-018-3939-1.

36. Sagar, N. A., Pareek, S., Sharma, S., Yahia, E. M., Lobo, M. G. (2018). Fruit and vegetable waste: Bioactive compounds, their extraction, and possible utilization. Comprehensive Reviews in Food Science and Food Safety, Vol. 17, No. 3, pp. 512–531. DOI: 10.1111/1541-4337.12330.

37. De-Laurentiis, V., Corrado, S., Sala, S. (2018). Quantifying household waste of fresh fruit and vegetables in the EU. Waste Management, Vol. 77, pp. 238–251. DOI: 10.1016/j.wasman.2018.04.001.

38. Python (2020). Python Software Foundation. http://www.python.org.

39. Bradski, G. (2000). The OpenCV library. Dr. Dobb's Journal of Software Tools. https://www.elibrary.ru/item.asp?id=4934581.

40. Von Loesecke, H. W. (1950). Bananas: Chemistry, physiology, technology. Interscience Publishers, pp. 189.

41. Barrett, D. M., Beaulieu, J. C., Shewfelt, R. (2010). Color, flavor, texture, and nutritional quality of fresh-cut fruits and vegetables: Desirable levels, instrumental and sensory measurement, and the effects of processing. Critical Reviews in Food Science and Nutrition, Vol. 50, pp. 369–389. DOI: 10.1080/10408391003626322.

42. Soltani, M., Alimardani, R., Omid, M. (2011). Changes in physico-mechanical properties of banana fruit during ripening treatment. The Journal of American Science, Vol. 7, No. 5, pp. 14–19.

43. Sánchez-Meca, J. (2010). Cómo hacer una revisión sistemática y un meta-análisis. Aula Abierta, Vol. 38, No. 2, pp. 53–64.

44. R Core Team (2020). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. https://www.r-project.org/.

45. Raschka, S., Mirjalili, V. (2019). Python machine learning, third edition: Machine learning and deep learning with Python, scikit-learn, and TensorFlow 2. Birmingham: Packt.

46. Colab (2021). Google Colaboratory. Available from: https://colab.research.google.com/notebooks/welcome.ipynb?hl=es.

47. Fernández, M. E., Nunell, G. V., Bonelli, P. R., Cukierman, A. L. (2014). Activated carbon developed from orange peels: Batch and dynamic competitive adsorption of basic dyes. Industrial Crops and Products, Vol. 62, pp. 437–445. DOI: 10.1016/j.indcrop.2014.09.015.

48. Idris, I., Dan-musa, A., Musa, A., Ilu, B. M., Sada, I. (2019). Evaluation of the influence of size reduction on methane production of orange peel waste samples at ambient temperature. Journal of Applied Chemistry, Vol. 12, No. 3, pp. 14–21.

49. Pathak, P. D., Mandavgane, S. A., Kulkarni, B. D. (2017). Fruit peel waste: Characterization and its potential uses. Current Science, Vol. 113, No. 3, pp. 444–454.

50. Pinzón-Bedoya, M. L., Cardona, A. M. (2008). Caracterización de la cáscara de naranja para su uso como material bioadsorbente. Revista de la Facultad de Ciencias Básicas, Vol. 6, No. 1, pp. 1–23.

51. Tejeda-Benítez, L., Tejada-Tovar, C., Marimón-Bolívar, W., Villabona-Ortiz, Á. (2014). Estudio de modificación química y física de biomasa (Citrus sinensis y Musa paradisiaca) para la adsorción de metales pesados en solución. Revista Luna Azul, No. 39, pp. 124–142.

52. Tejada-Tovar, C., González-Delgado, A., Villabona-Ortiz, A. (2018). Adsorption kinetics of orange peel biosorbents for Cr (VI) uptake from water. Contemporary Engineering Sciences, Vol. 11, No. 24, pp. 1185–1193. DOI: 10.12988/ces.2018.83105.

53. Tejada-Tovar, C., Herrera-Barros, A., Villabona-Ortiz, A. (2019). Assessment of chemically modified lignocellulose waste for the adsorption of Cr (VI). Revista Facultad de Ingeniería, Vol. 29, No. 54. DOI: 10.19053/01211129.v29.n54.2020.10298.

54. Zoffreo, A. N. (2019). Two stage anaerobic digestion of orange peels. Worcester Polytechnic Institute, Digital WPI. https://core.ac.uk/download/pdf/213002462.pdf.

55. Thomas, E. Y., Adiku, S. G., Atkinson, C. J., Omueti, J. A., Marcarthy, D. S. (2019). Evaluation of CO2 emission from rice husk biochar and cowdung manure co-compost preparation. Journal of Agricultural Science, Vol. 11, No. 17, pp. 158. DOI: 10.5539/jas.v11n17p158.

56. Omar, S. Z., Hasan, A. H., Lalov, I. (2020). Potato peels and mixed grasses as raw materials for biofuel production. ARO-The Scientific Journal of Koya University, Vol. 8, No. 1, pp. 31–37.

57. Liang, S., McDonald, A. G. (2014). Chemical and thermal characterization of potato peel waste and its fermentation residue as potential resources for biofuel and bioproducts production. Journal of Agricultural and Food Chemistry, Vol. 62, No. 33, pp. 8421–8429. DOI: 10.1021/jf5019406.

58. Gautam, K., Pareek, A., Sharma, D. K. (2015). A method to utilize waste nutrient sources in aqueous extracts for enhancement of biomass and lipid content in potential green algal species for biodiesel production. Journal of Bioprocessing & Biotechniques, Vol. 5, No. 10, pp. 1–13. DOI: 10.4172/2155-9821.1000259.

59. Rynk, R., Van de Kamp, M., Willson, G. B., Singley, M. E., Richard, T. L., Kolega, J. J., Gouin, F. R., Laliberty, L. J., Kay, D., Murphy, D. W., Hoitink, H. A. J., Brinton, W. F. (1992). On-farm composting handbook. Ithaca: Northeast Regional Agricultural Engineering Service.

60. Sial, T. A., Khan, M. N., Lan, Z., Kumbhar, F., Ying, Z., Zhang, J., Sun, D., Li, X. (2019). Contrasting effects of banana peels waste and its biochar on greenhouse gas emissions and soil biochemical properties. Process Safety and Environmental Protection, Vol. 122, pp. 366–377. DOI: 10.1016/j.psep.2018.10.030.

61. Rojas, A. F., Rodríguez-Barona, S., Montoya, J. (2019). Evaluación de alternativas de aprovechamiento energético y bioactivo de la cáscara de plátano. Información Tecnológica, Vol. 30, No. 5, pp. 11–24. DOI: 10.4067/S0718-07642019000500011.

62. Rojas-González, A. F., Flórez-Montes, C., López-Rodríguez, D. F. (2018). Use prospects of some agroindustrial waste. Revista Cubana de Química, Vol. 31, No. 1, pp. 31–52.

63. Filter, J. (2018). split-folders 0.4.3. http://pypi.org/project/split-folders/.

64. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. DOI: 10.48550/arXiv.1704.04861.

65. Simonyan, K., Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. DOI: 10.48550/arXiv.1409.1556.

66. Seijas, C., Montilla, G., Frassato, L. (2019). Identification of rodent species using deep learning. Computación y Sistemas, Vol. 23, No. 1, pp. 257–266. DOI: 10.13053/cys-23-1-2906.

67. Zheng, Q. (2021). Classifying states of cooking objects using convolutional neural network. DOI: 10.48550/arXiv.2105.14196.

68. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, Vol. 20, No. 1, pp. 37–46. DOI: 10.1177/001316446002000104.

69. Landis, J. R., Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, Vol. 33, No. 1, pp. 159–174. DOI: 10.2307/2529310.

Received: July 02, 2022; Accepted: January 09, 2023

* Corresponding author: Joaquín Gutiérrez, e-mail: joaquing04@cibnor.mx

This is an open-access article distributed under the terms of the Creative Commons Attribution License.