Article

A Model for Detecting Xanthomonas campestris Using Machine Learning Techniques Enhanced by Optimization Algorithms

by Daniel-David Leal-Lara 1,2,*, Julio Barón-Velandia 2, Lina-María Molina-Parra 2 and Ana-Carolina Cabrera-Blandón 1

1 Computer and Systems Engineering Program, Faculty of Engineering and Basic Sciences, Fundación Universitaria Los Libertadores, Bogotá 111221, Colombia
2 Faculty of Engineering, Universidad Distrital Francisco José de Caldas, Bogotá 111611, Colombia
* Author to whom correspondence should be addressed.
Submission received: 14 November 2024 / Revised: 13 December 2024 / Accepted: 19 December 2024 / Published: 21 January 2025
(This article belongs to the Section Digital Agriculture)

Abstract
The bacterium Xanthomonas campestris poses a significant threat to global agriculture due to its ability to infect leaves, fruits, and stems under various climatic conditions. Its rapid spread across large crop areas results in economic losses, compromises agricultural productivity, increases management costs, and threatens food security, especially in small-scale agricultural systems. To address this issue, this study developed a model that combines fuzzy logic and neural networks, optimized with intelligent algorithms, to detect symptoms of this foliar disease in 15 essential crop species under different environmental conditions using images. For this purpose, Sugeno-type fuzzy inference systems and adaptive neuro-fuzzy inference systems (ANFIS) were employed, configured with rules and clustering methods designed to address cases where diagnostic uncertainty arises due to the imprecision of different agricultural scenarios. The model achieved an accuracy of 93.81%, demonstrating robustness against variations in lighting, shadows, and capture angles, and proving effective in identifying patterns associated with the disease at early stages, enabling rapid and reliable diagnoses. This advancement represents a significant contribution to the automated detection of plant diseases, providing an accessible tool that enhances agricultural productivity and promotes sustainable practices in crop care.

1. Introduction

The loss of crops caused by foliar diseases in plants represents a growing challenge to global food security, especially in the context of demographic expansion, which increases the demand for food [1]. This issue highlights the urgent need to implement sustainable agricultural solutions to mitigate impacts, particularly for small-scale farmers who depend on healthy crop yields and the economic stability generated by their market products [2,3].
In traditional agriculture, disease detection is carried out empirically through direct and constant observation, and treatment largely depends on the use of pesticides [4]. Besides being slow, costly, uncertain, and unreliable [5], this approach can reduce productive capacity by up to 50% because of the complexity of the task.
Conversely, timely and accurate diagnosis of plant diseases is a key component of precision agriculture [6], in which non-destructive remote sensing methods have been widely used to monitor crops in the visible and non-visible spectra [7]. These methods offer novel agricultural solutions for improving and optimizing crop yields and constitute a continuously evolving research field [8]. Without dismissing human expertise in solving complex tasks, this automated approach shows promise for enhancing crop productivity and protecting the environment [9], substantially reducing monitoring effort and enabling the detection of disease symptoms at early stages [10].
Advances in active remote sensing and in image-processing-based diagnostics have made it possible to build detection algorithms with machine learning [11]. For example, integrated methods have been developed for detecting diseases in rice crops from smartphone-acquired images, using preprocessing in the hue-saturation-brightness (HSB) color space to extract the region of interest, segment the image, and extract the features that allow the severity stages of blight disease to be identified [12].
Similarly, ref. [13] effectively used informative regions of an image to build multiple image classification models through transfer learning, enabling the detection of 21 diseases across 14 fine-grain crops using convolutional neural networks optimized with stochastic gradient descent, achieving an accuracy of 93.05%. These studies demonstrate the effectiveness of deep learning methods for early and robust disease identification based on pattern recognition, such as detecting peach leaf diseases caused by Xanthomonas campestris using architectures like AlexNet fine-tuned with transfer learning, providing reliable agricultural diagnoses [14].
Several studies have also explored machine learning algorithms such as the support vector machine (SVM) [15], k-nearest neighbors (KNN), random forest, naïve Bayes [16], and decision trees [17]. In some cases, supported by hybrid intelligent architectures and optimization techniques such as particle swarm optimization, gradient descent optimizers, and feature selection algorithms [18], these approaches have achieved performances between 54.1% and 99.7%.
However, other studies have shown that farmers can improve the precision and efficiency of disease detection and management by employing fuzzy logic techniques in precision agriculture, leading to higher crop yields, since the use of fuzzy logic takes advantage of the flexibility and interpretability of logic systems to handle the uncertainties and inaccuracies inherent in agricultural data [19]. In this regard, ref. [20] presented an advanced model for predicting plant leaf disease using an adaptive fuzzy expert system optimized with the cat swarm-based Harris hawks (CSHH) algorithm and data collected via IoT. The method processes leaf images and environmental data, extracting characteristics using patterns to classify diseases with optimized “if–then” rules. Validated with maize, grape, and tomato datasets, the model demonstrated an accuracy of 94.61%, significantly outperforming traditional approaches such as KNN and SVM.
Similarly, ref. [21] provided a diagnosis of apple black spot using an adaptive neuro-fuzzy system with digital camera images, achieving 89% accuracy. Meanwhile, ref. [22] used a hybrid fuzzy and k-nearest neighbor (KNN) method to detect diseases and problems in rice plants. In this approach, fuzzy logic determines the membership value of the disease detection class, while KNN identifies the closest distance between evaluated data and its k nearest neighbors in the training data, achieving 98.74% accuracy in tests conducted with 200 cases of 13 diseases and pests.
In this context, this study proposes a model for detecting foliar disease caused by Xanthomonas campestris, based on the integration of fuzzy inference systems optimized with machine learning algorithms. The objective was to develop a system that is not only accurate in detecting the disease but also interpretable and accessible to end users, allowing informed decision making. This is particularly important in agriculture, where confidence in automated systems depends heavily on the ability of users to understand and verify the results.
For the development of this article, the data acquisition, the processing applied to optimize results, and the configuration of four fuzzy and neuro-fuzzy systems were first specified. These systems were then compared in their training and testing stages, and statistical tests on the results were used to determine which model provides the fastest and most reliable diagnosis.

2. Materials and Methods

2.1. Data Acquisition

The method of detecting plant diseases from images dates back to the 1980s, when one of the first prominent studies was conducted in the USA, where researchers proposed solutions to reduce crop wilt losses by using color infrared photography to detect infections in soybean crops [23]. Early applications involved pattern detection algorithms that combined remote sensing with symptom-based diagnostic techniques, yielding accurate and reliable results [11].
To develop the proposed model, a dataset of 1471 images was compiled covering 15 plant species susceptible to infections by various subspecies of Xanthomonas campestris and commonly grown on a large scale. These species include banana (Musa x paradisiaca), radish (Raphanus sativus), walnut (Juglans regia), tomato (Solanum lycopersicum), soybean (Glycine max), pumpkin (Cucurbitaceae), plum (Prunus domestica), pepper (Capsicum annuum), peach (Prunus persica), mango (Mangifera indica), hazelnut (Corylus avellana), cabbage (Brassica oleracea), broccoli (Brassica oleracea var. italica), cauliflower (Brassica oleracea var. gemmifera), and bean (Phaseolus vulgaris).
These species were selected because they are essential crops in world agriculture, as many of them play a key role in food security due to their high production, nutritional value, and adaptability to diverse regions and climatic conditions. Therefore, the images used represented both healthy and diseased leaves under different environmental conditions, including variations in lighting, angles, shadows, and brightness levels, simulating real-world agricultural scenarios.
In healthy conditions, the leaves of these species are predominantly green, which facilitates the early detection of disease symptoms, provided the affected foliar area exhibits between 10% and 15% visible symptoms. Moreover, the diversity in the shape, size, and texture of the leaves allows the model to identify specific patterns associated with infection, enhancing its accuracy and robustness in analyzing a wide range of foliar morphologies in diverse agricultural contexts.
The dataset used includes images of both healthy and diseased leaves (see Figure 1) captured with a digital camera under controlled conditions, varying in terms of lighting, angles, shadows, and brightness. Additionally, images from the public PlantVillage dataset (https://plantvillage.psu.edu/plants, accessed on 15 January 2022) were incorporated, which proved particularly valuable due to its unique characteristics, such as standardized backgrounds, consistent sharpness conditions, and broad representativeness of crop features [24]. These attributes not only strengthen the dataset’s quality but also improve the model’s generalization by including diverse scenarios. Furthermore, PlantVillage is a publicly available resource widely used in agricultural research, facilitating the development of studies based on advanced image processing techniques [25].
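As a rough illustration of how such a dataset can be organized in MATLAB (the environment used later for the fuzzy systems), the sketch below loads local and PlantVillage-derived images from labeled folders and holds out a test portion; the root folder name and the "healthy"/"diseased" subfolder layout are assumptions for illustration, not the exact organization used in this study.

    % Minimal sketch (assumed folder layout): load labeled leaf images and
    % hold out 30% of each class for testing, as in the experiments below.
    imds = imageDatastore("dataset", ...               % root folder (assumed name)
        "IncludeSubfolders", true, ...
        "LabelSource", "foldernames");                 % labels from healthy/diseased folders

    [imdsTrain, imdsTest] = splitEachLabel(imds, 0.7, "randomized");  % 70%/30% split

    countEachLabel(imdsTrain)                          % quick check of class balance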

2.2. Preprocessing

The pattern recognition focused on regions with specific symptoms, including small, irregularly shaped spots (1–5 mm in diameter) with black veins and necrotic centers on leaves and stems. Additional symptoms documented by [26] include yellow halos and brown lesions visible from the early stages of infection.
During image collection, specific segments were cropped in various sizes (see Figure 2) to ensure that each input to the model excluded backgrounds that could confuse or overwhelm it with unnecessary information, thus optimizing the training process. The images were stored in the RGB color model to preserve variations in lighting, contrast, angle, and shadow, enabling the model to recognize different leaf characteristics at various times of the day. This approach allows the model to assess a color spectrum associated with disease symptoms in infected leaves, evaluating each pixel and identifying colors corresponding to the disease’s natural presentation.
Studies such as [27] have suggested that preprocessing images used for training machine learning models can significantly enhance the visual quality of input images, optimizing color analysis of disease symptoms and, consequently, improving diagnostic accuracy. Thus, following the acquisition and cropping of images, a color transformation was performed on each image, converting from the RGB model to the normalized HSB model. This change was necessary because RGB values are highly sensitive to variations in lighting conditions, contrast, and shadows, which can introduce significant noise and hinder the accurate identification of disease-related patterns [4]. The HSB model, unlike RGB, addresses these limitations by decomposing color into more interpretable components: hue, which defines the chromatic nature (for example, red, yellow, or brown); saturation, which allows differentiation between advanced symptoms (intense colors) and initial symptoms (less saturated colors); and brightness, which reflects the luminosity of the color. This approach enhances the robustness of the model by mitigating the impact of variable lighting conditions through the establishment of thresholds, as shown in Figure 3, where the assignment of a color scale based on yellow, red, and brown tones allowed for the identification of disease symptoms across the different species evaluated.
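A minimal sketch of this color transformation in MATLAB is shown below. The hue, saturation, and brightness thresholds are illustrative placeholders only, since the exact ranges used for the yellow, red, and brown symptom tones are not listed in the text, and the input file name is assumed.

    % Convert a cropped leaf segment from RGB to the normalized HSB (HSV) model
    % and flag pixels whose tone falls in an assumed symptom range.
    rgb = im2double(imread("leaf_crop.png"));   % cropped segment (assumed file name)
    hsb = rgb2hsv(rgb);                         % H, S, B channels normalized to [0, 1]

    H = hsb(:,:,1);  S = hsb(:,:,2);  B = hsb(:,:,3);

    % Illustrative thresholds: low hue (yellow-red-brown tones) with enough
    % saturation and brightness to exclude shadows and washed-out background.
    symptomMask = (H < 0.17) & (S > 0.35) & (B > 0.25);

    symptomFraction = nnz(symptomMask) / numel(symptomMask);   % share of flagged pixels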
After processing, the input model enhances feature analysis for training by utilizing various color spaces and parameters. This approach enables the system to distinguish plants without the characteristic disease spots and whose color falls within the range assigned to healthy leaves. Additionally, it improves the model’s ability to recognize patterns associated with the disease, supporting more accurate decision making in identifying healthy versus diseased leaves.

2.3. Model Configuration

Machine learning models are extensively utilized to extract critical crop parameters for prediction. For example, ref. [28] developed a neural network comprising an input layer, a membership function layer, a rule layer, and an output layer that can predict crop yield sustainably. In configuring the proposed model, two systems are holistically applied and characterized by precision and interpretability.
These systems address imprecise problems for which solutions are often complicated or even impossible to find [29]. The first system is a Sugeno-type inference system for classification. The second is an adaptive neuro-fuzzy inference system (ANFIS) executed on the training data to enhance the model’s accuracy and generalization capacity. Additionally, an intelligent hybrid optimization mechanism and the interior point algorithm are incorporated to improve problem-solving capabilities and overall model performance, effectively combining the advantages of fuzzy logic with machine learning.

2.3.1. Sugeno Fuzzy System Configuration

The Sugeno fuzzy system addresses classification problems through fuzzy rules with output functions that are typically linear or constant, enabling interoperability and optimal results in applications [30]. In the proposed model, an initial Sugeno system was developed to detect, at the pixel level on the HSB scale, the specific features associated with the color tone of Xanthomonas campestris disease. This output allows a second system with similar characteristics to make decisions regarding the leaf’s condition based on the overall results of an image.
The configuration of this first system relies on three fuzzy sets for the input, each structured with three membership functions representing low, medium, and high values on the HSB scale (where H corresponds to hue, S to saturation, and B to brightness). A default linear function is defined for the output, facilitating the modeling of the relationship between input values and system response. This optimization enhances the inferential process and improves decision-making accuracy based on the analyzed visual characteristics.
To compare the accuracy of the systems, configurations were tested both with and without clustering, which identifies groups with similar characteristics within a dataset, aiding in segmentation and analysis, as demonstrated by [31]. In systems utilizing fuzzy clustering, rules are automatically defined, and three fuzzy clustering rules are executed, ensuring that each input is uniformly related to the outputs, with a maximum of 27 rules. For the system implemented without fuzzy clustering, an exploratory analysis of the different HSB values and their significance for identifying healthy or diseased pixels was conducted based on the following rules:
  • If the value of H is low, S is low, and B is low, then it is healthy;
  • If the value of H is low, S is medium, and B is low, then it is healthy;
  • If the value of H is low, S is medium, and B is medium, then it is healthy;
  • If the value of H is low, S is medium, and B is high, then it is sick;
  • If the value of H is medium, S is medium, and B is high, then it is sick;
  • If the value of H is low, S is high, and B is high, then it is sick;
  • If the value of H is medium, S is high, and B is high, then it is sick.
Table 1 describes the settings made for the Sugeno inference systems without clustering, listing the value assigned to each parameter of the MATLAB sugfis function (MATLAB version R2023b).
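A condensed sketch of how such a rule-based Sugeno system can be declared with the MATLAB sugfis function is given below; the membership-function shapes and parameter values are assumptions rather than the tuned values, and only two of the seven rules listed above are written out.

    % Sketch of the first (pixel-level) Sugeno system without clustering.
    fis = sugfis("Name", "pixelFIS");

    % Three inputs on the normalized HSB scale, each with low/medium/high
    % membership functions (Gaussian shapes and parameters are illustrative).
    names = ["H" "S" "B"];
    for k = 1:3
        fis = addInput(fis, [0 1], "Name", names(k));
        fis = addMF(fis, names(k), "gaussmf", [0.15 0.0], "Name", "low");
        fis = addMF(fis, names(k), "gaussmf", [0.15 0.5], "Name", "medium");
        fis = addMF(fis, names(k), "gaussmf", [0.15 1.0], "Name", "high");
    end

    % Linear output functions for the two pixel states (coefficients are placeholders).
    fis = addOutput(fis, [0 1], "Name", "state");
    fis = addMF(fis, "state", "linear", [0 0 0 0.62], "Name", "healthy");
    fis = addMF(fis, "state", "linear", [0 0 0 0.39], "Name", "sick");

    % Two of the seven expert rules from the list above, in text form.
    fis = addRule(fis, [ ...
        "H==low & S==low & B==low => state=healthy"; ...
        "H==medium & S==high & B==high => state=sick"]);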
For the systems implemented with fuzzy clustering, the configuration shown in Table 2 was used; these parameters were, in turn, passed to the MATLAB genfis function.
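For the clustered variant, a sketch of the corresponding genfis call is shown below; FCM clustering with three clusters is assumed from the three fuzzy clustering rules described above, and trainX/trainY are synthetic stand-ins for the HSB training pixels and their target states.

    % Sketch of the clustered variant: genfis derives the rule base from data.
    trainX = rand(500, 3);                    % synthetic stand-in for N-by-3 HSB pixel values
    trainY = double(rand(500, 1) > 0.5);      % synthetic stand-in for pixel states

    opt = genfisOptions("FCMClustering", "FISType", "sugeno");
    opt.NumClusters = 3;                      % three clusters, one rule per cluster

    fisClustered = genfis(trainX, trainY, opt);
    showrule(fisClustered)                    % inspect the automatically generated rules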
To finalize the configuration of this first system, three neuro-fuzzy sets were utilized for the input, corresponding to different pixel-level values in the HSB color scale. Each set includes three membership functions, consistent with the configuration defined in the clusters, and an output set that determines the pixel’s state (healthy or sick) based on its structure, as shown in Figure 4. The Rule layer, represented by blue circles, integrates the input membership functions using logical operations (e.g., AND, OR) to evaluate the fuzzy rules and generate corresponding outputs. The black circles symbolize crisp (exact) values for both the inputs and the final output, while the white circles represent intermediate fuzzy values, such as degrees of membership for the inputs and outputs. This framework enhances the interpretation and classification of the data, facilitating pattern identification indicative of crop health and ensuring an effective response to variations in color values.
Figure 5 presents the rules and their correspondence with the membership functions. This representation illustrates how the rules establish the relationship between inputs and outputs, reflecting the results through the linear function based on the different values that the input membership functions can assume (HSB values). Additionally, the output neuro-fuzzy surface is visualized, depicting in a three-dimensional graph how variations in the inputs—individually or in combination—affect the system’s output. In this graph, the axes represent the input variables, while the vertical axis shows the corresponding output value, providing a clear understanding of the system’s behavior within the sample space.

2.3.2. Adaptive Neuro-Fuzzy Inference System (ANFIS) Configuration

Based on the average (avg) results obtained at the pixel level from the first system, the second system’s configuration considers the model’s capacity to determine whether the disease Xanthomonas campestris is present in the entire image. This system comprises five membership functions representing low, medium-low, medium, medium-high, and high values. Similar to the first system, a linear function is assigned for the output by default.
In this case, configurations with and without clustering were also executed. For the systems utilizing fuzzy clustering, rules are defined as in the first system; however, five fuzzy clustering rules are executed. For the system implemented without fuzzy clustering, an exploratory analysis of the different average values and their significance in identifying healthy or diseased leaves was conducted based on the following rules:
  • If the avg value is high, then it is healthy;
  • If the avg value is medium-high, then it is healthy;
  • If the avg value is medium, then it is healthy;
  • If the avg value is medium-low, then it is sick;
  • If the avg value is low, then it is sick.
Neuro-fuzzy inference systems are created after developing fuzzy inference systems, which combine artificial neural networks and fuzzy logic to facilitate decision making based on imprecise or uncertain information. This capability stems from their sensitivity to the definition of membership functions [32]. Additionally, this system employs a hybrid learning procedure that builds an input–output map based on human knowledge, effectively describing how values can belong to different categories, thereby considerably reducing modeling time [33].
For its configuration, a maximum limit of 600 training phases was established, and, similar to the fuzzy inference system, configurations with and without clustering were executed under the same parameters. This system used a backpropagation optimization method for the input, while least squares were applied for the output. Validation was performed with various input and expected output test values, which helped avoid overfitting in each case.
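The sketch below shows one way such an ANFIS training run can be set up in MATLAB; the 600-epoch limit and the hybrid backpropagation/least-squares method follow the description above, while the training and validation matrices are synthetic stand-ins for the real [avg, label] data.

    % Sketch of ANFIS training for the second (leaf-level) system.
    trainData = [rand(100, 1), rand(100, 1)];    % synthetic stand-in: [avg, expected state]
    checkData = [rand(30, 1),  rand(30, 1)];     % synthetic validation data against overfitting

    opt = anfisOptions("InitialFIS", 5, ...          % five membership functions on the input
                       "EpochNumber", 600, ...       % maximum number of training phases
                       "OptimizationMethod", 1, ...  % 1 = hybrid: backpropagation + least squares
                       "ValidationData", checkData);

    [leafFIS, trainErr, ~, bestValFIS, checkErr] = anfis(trainData, opt);
    plot([trainErr checkErr])                        % training vs. validation error curves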
Finally, the neuro-fuzzy configuration of the second system is based on an input set that represents the average of the values obtained from an image previously evaluated by the first system. This system’s structure utilizes a single input set with five membership functions, relating the input data to the output, as shown in Figure 6. The black dots represent the crisp (exact) values for the input and output, while the white dots correspond to intermediate fuzzy values, such as the degrees of membership of the input data and the output state. This configuration provides a rapid and accurate diagnosis of the leaf’s condition regarding the presence of Xanthomonas campestris, facilitating early disease identification and contributing to more efficient crop management.
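Putting the two systems together, a hedged sketch of the end-to-end diagnosis of one image could look as follows; pixelFIS and leafFIS refer to the systems built in the sketches above, hsbPixels is a synthetic stand-in for the preprocessed pixel values, and the decision ranges quoted in the Results (roughly 0.39–0.54 for diseased and 0.55–0.70 for healthy) are used only as an illustration.

    % Sketch of the two-stage inference for one preprocessed image.
    hsbPixels = rand(1000, 3);                    % synthetic stand-in for [H S B] pixel values

    pixelScores = evalfis(pixelFIS, hsbPixels);   % first system: one score per pixel
    avgScore    = mean(pixelScores);              % aggregate over the whole image

    leafScore = evalfis(leafFIS, avgScore);       % second system: leaf-level diagnosis

    % Decision ranges taken from the Results section (illustrative use only).
    if leafScore >= 0.55
        diagnosis = "healthy";
    else
        diagnosis = "diseased";
    end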

2.4. Optimization

To minimize information loss in the images and ensure the most accurate results from a rapid diagnosis, the model was optimized by implementing a hybrid algorithm. The parameters of this algorithm are specified in Table 3 and were established to enable the model to identify candidate points from healthy pixels. This was achieved by dilating the candidate points twice without utilizing the pixels obtained from an interior point algorithm after the genetic algorithm execution.
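A sketch of how the Table 3 parameters map onto MATLAB's ga solver, followed by an interior-point refinement with fmincon, is given below; the objective function and the number of decision variables are hypothetical placeholders, since the exact fitness formulation of the hybrid algorithm is not spelled out in the text.

    % Hypothetical stand-ins for the model's fitness function and parameter count.
    objFun = @(x) sum(x.^2);
    nVars  = 4;

    % Genetic algorithm configured with the Table 3 settings.
    gaOpts = optimoptions("ga", ...
        "PopulationSize", 100, ...        % population size
        "MaxGenerations", 100, ...        % maximum number of iterations before stopping
        "EliteCount", 2, ...              % individuals guaranteed to survive
        "CrossoverFraction", 0.8, ...     % fraction of the next generation created by crossover
        "MaxTime", 300);                  % stop after 300 s of run time

    [xGA, fGA] = ga(objFun, nVars, [], [], [], [], [], [], [], gaOpts);

    % Local refinement of the GA solution with the interior-point algorithm.
    ipOpts = optimoptions("fmincon", "Algorithm", "interior-point");
    [xBest, fBest] = fmincon(objFun, xGA, [], [], [], [], [], [], [], ipOpts);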

3. Results

Once the variable selection process and the configuration of the neuro-fuzzy systems were defined, four models were developed: two for a system that evaluates the state of the image from pixel-level input on an HSB scale, using either fuzzy clustering in one case or defined rules in the other. The remaining two models determine the diagnosis of the leaf based on the average of all pixels in an image processed by the first system, with established ranges for diseased (0.3884 to 0.54) and healthy (0.55 to 0.70) states.
Each model was run 50 times, utilizing 70% of the dataset for training and the remaining 30% for testing. In the first validation, the results were compared in terms of maximum, minimum, mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) for the training and validation datasets. These results are summarized in Table 4 and Table 5, highlighting the executions with the lowest squared error and identifying the best-performing execution in each case. It is important to note that some models may converge to local minima rather than reaching the global minimum.
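As an illustration of how the per-run statistics in Tables 4 and 5 can be computed, the sketch below compares expected and simulated outputs for one run; yTrue and yPred are synthetic stand-ins for the expected and model-generated values.

    % Sketch of the per-run error metrics (MSE, RMSE, MAE, MIN, MAX).
    yTrue = rand(50, 1);                       % synthetic expected outputs
    yPred = yTrue + 0.01 * randn(50, 1);       % synthetic simulated outputs

    err  = yTrue - yPred;
    mse  = mean(err.^2);                       % mean squared error
    rmse = sqrt(mse);                          % root mean squared error
    mae  = mean(abs(err));                     % mean absolute error
    eMin = min(abs(err));                      % smallest absolute deviation
    eMax = max(abs(err));                      % largest absolute deviation

    fprintf("MSE %.2e  RMSE %.2e  MAE %.2e  MIN %.2e  MAX %.2e\n", ...
            mse, rmse, mae, eMin, eMax);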
To determine the system with the best accuracy and performance, particularly when using clusters, the results were evaluated in terms of maximum and minimum errors, mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) in the training and validation datasets. Additionally, a Kruskal–Wallis test was conducted to compare the medians of the data groups and assess whether the samples originated from the same population. Finally, a confusion matrix was utilized for the best-performing model to quantitatively evaluate its performance, aiding in identifying errors across all predictions.

3.1. Training

During training, images exhibiting the most recognizable disease patterns were selected to avoid redundancy and confusion that could adversely affect the resulting diagnosis. This approach ensured that the learning process was based on representative and diverse examples. Subsequently, 50 runs of the two systems were conducted, yielding the results summarized in Table 6, which presents the statistical values and highlights the run that achieved the best outcome.
To determine the accuracy with which the cluster-based systems make predictions relative to actual outcomes, and to evaluate the performance of their configurations, the results were analyzed against both simulated and real values. This analysis made it possible to identify potential error patterns and points of confusion in the classifications, and to assess model efficiency, highlighting areas for improvement and adjustment during training to optimize performance and ensure reliable diagnostics.
In the case of the first system, during the training phase, the expected data for a healthy pixel showed a value close to 0.62, while the simulated values for diseased pixels approximated 0.3884. Based on this, the mean squared error (MSE) recorded was 1.62 × 10⁻³, indicating minimal differences between the simulated and expected values. Numerically, the low MAE and RMSE further indicate a well-calibrated predictive model. Although certain iterations may yield results significantly different from the real values, these appear to be sporadic and do not significantly affect the overall stability of the model.
On the other hand, the performance of the second system during the training phase was evaluated by comparing the average of the values generated by the first system and determining the error based on the prediction of the plant’s condition. In this case, an MSE of 7.21 × 10⁻³ was recorded, indicating no large recurring errors, and most simulated values remain close to the actual ones. This suggests that the model not only follows trends effectively but also exhibits low error dispersion, maintaining a low average error rate and robustness against anomalies.

3.2. Test

For the test phase, images were randomly selected from healthy samples and those infected with Xanthomonas campestris that had not been used in model training. This approach was taken to more accurately verify the model’s ability to recognize previously unseen patterns. As in the training phase, 50 runs were conducted, resulting in the statistical data summary presented in Table 7 and providing an approximation of the actual accuracy of the systems.
As in the training phase, the testing phase also evaluated the accuracy of the cluster-based systems’ predictions in relation to the results. To assess the performance of their configurations, their results were analyzed considering simulated and actual values.
For the first system, the expected value for healthy pixel data remained consistent, close to 0.62, while the comparison of simulated and expected values for pixels representing diseased states approximated 0.3884. In this phase, the mean squared error (MSE) was 8.30 × 10⁻⁴, showing better performance than in the training phase. However, the simulated data exhibited significant variability, reflecting the potential detection of uncommon events. This variability suggests the model’s sensitivity to factors affecting performance, indicating potential for adjustment and improved accuracy in future iterations.
Finally, the performance of the second system was evaluated by comparing the simulated results with actual values. In this case, the MSE, equivalent to 6.18 × 10⁻³, indicates that although there were some significant differences between the actual and simulated values at specific points, the mean of the squared differences remained relatively small. This means that the simulated values are not far from the actual ones, despite some variations, consolidating a reasonably good model. Additionally, the low MAE suggests that, in absolute terms, the average differences between simulated and actual values are small. While the model does not always predict specific points with precision, the deviations are generally minor and fall within an acceptable range, which is a positive aspect of the model.
Overall, the system in the testing phase performed well in capturing the general trend of the actual data. The average error was small, meaning that although the system cannot perfectly follow abrupt changes or exceptional events, it does not make extremely large errors. These results highlight the robustness of the model and its ability to generalize, which is crucial for its application in real-world agricultural diagnostic scenarios.

3.3. Validation of the Best Model

Based on the results obtained, the first and second neuro-fuzzy systems with clustering were identified as the best-performing models. To validate this conclusion, a Kruskal–Wallis test was conducted. This test, which does not require the data to follow a normal distribution, provides greater versatility for diverse data types by comparing the medians of data groups to determine if the samples originate from the same population, verifying equidistribution. In other words, it assesses whether, after applying optimization algorithms, the model results for the training data and their expected values belong to the same population.
The test was conducted at a 0.05 significance level and yielded a p-value of 0.646. Since this value is above the threshold, we conclude that the samples are equidistributed and belong to the same distribution. Thus, the null hypothesis cannot be rejected, suggesting that the model employing fuzzy clustering produces consistent and reliable results.
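A compact sketch of this check with MATLAB's kruskalwallis function is shown below; expectedVals and modelVals are synthetic stand-ins for the expected training outputs and the model outputs of the best clustered configuration.

    % Sketch of the Kruskal-Wallis check on the best clustered model.
    expectedVals = rand(100, 1);                         % synthetic expected outputs
    modelVals    = expectedVals + 0.02 * randn(100, 1);  % synthetic model outputs

    vals   = [expectedVals; modelVals];
    groups = [repmat("expected", numel(expectedVals), 1); ...
              repmat("model",    numel(modelVals),    1)];

    p = kruskalwallis(vals, groups, "off");              % "off" suppresses the box plot

    if p > 0.05
        disp("Samples are consistent with the same distribution (H0 not rejected).");
    end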

3.4. Overall Model Performance

After achieving optimal results with the clustered neuro-fuzzy inference model, enhanced by a hybrid intelligent algorithm, the model’s configuration was evaluated using a confusion matrix applied to the entire dataset, as shown in Figure 7. This matrix demonstrates that the model attained an overall precision of 92.34%, a sensitivity of 95.28%, a specificity of 92.40%, and an accuracy of 93.81%. These metrics underscore the model’s effectiveness in detecting the disease, with a high degree of reliability in its predictions. The high sensitivity indicates the model’s strength in accurately identifying positive cases, while the specificity highlights its competence in correctly classifying negative cases.
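The sketch below shows how the four reported metrics can be derived from a confusion matrix in MATLAB; yActual and yPredicted are synthetic stand-ins for the true and predicted labels over the full dataset, with "diseased" treated as the positive class.

    % Sketch of the performance metrics derived from the confusion matrix.
    yActual    = [repmat("diseased", 60, 1); repmat("healthy", 40, 1)];   % synthetic labels
    yPredicted = yActual;
    yPredicted([5 12 70]) = ["healthy"; "healthy"; "diseased"];           % a few assumed errors

    order = ["diseased"; "healthy"];
    cm = confusionmat(yActual, yPredicted, "Order", order);

    TP = cm(1,1);  FN = cm(1,2);
    FP = cm(2,1);  TN = cm(2,2);

    precision   = TP / (TP + FP);
    sensitivity = TP / (TP + FN);              % recall for the positive (diseased) class
    specificity = TN / (TN + FP);
    accuracy    = (TP + TN) / sum(cm, "all");

    confusionchart(cm, order);                 % visual form analogous to Figure 7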

4. Discussion

In recent years, research focused on artificial vision and pattern recognition has shifted towards automating the disease detection process in crops [34], which has led to the development of systems capable of identifying leaf diseases with less manual intervention [35] and able to diagnose a variety of crop diseases accurately and quickly, providing reliable and fast results through computerized detections [36]. This advancement is especially valuable in crops susceptible to foliar diseases such as blight, which can spread rapidly, causing plant mortality and significantly reducing agricultural productivity [37].
The results obtained in this study highlight the potential of combining fuzzy logic with neural networks, demonstrating that their use is not only effective for modeling complex systems with uncertainty but also for adapting to different agricultural scenarios. This adaptability is due to their ability to integrate environmental factors such as light variations, which facilitates early detection of diseases like Xanthomonas campestris even in the initial stages. Furthermore, the configurability of the ANFIS model allows its application in both crop-specific diseases and multiple species, showing consistent results across various scenarios.
In comparison with other advanced architectures, such as DenseNet (average accuracy of 97%) [38] and advanced multitask networks like VGG16 (accuracy of up to 98.75%) [39], the ANFIS model offers a more interpretable and adaptable alternative. This advantage is particularly relevant in crops with limited historical data, where deep learning techniques require extensive retraining, thus increasing both costs and implementation times.
This approach is particularly relevant for small-scale agriculture, where access to advanced technologies and diagnostic experts is limited. ANFIS provides a more accessible and sustainable alternative that can be integrated with low-cost technologies such as basic remote sensors, representing a significant advance in the automation of crop monitoring. Additionally, the use of images as input facilitates implementation in real-time monitoring systems: with a diagnostic accuracy of 93.8%, the model significantly increases the reliability of crop monitoring and provides farmers with an accessible tool for rapid intervention, substantially reducing their losses.
This advancement particularly benefits empirical farmers, as it reduces their dependence on experts for crop care and monitoring, optimizing decision making and contributing to a more efficient resource management, which can improve agricultural productivity.
However, it is important to note that, although the model showed high performance under the evaluated conditions, its effectiveness may be affected by factors such as image quality and extreme variations in environmental conditions. Furthermore, while 15 different species were used in the dataset, it would be beneficial to expand the species variety to explore the model’s performance across a wider range of crops.
Finally, it is important to emphasize that machine learning-based techniques have considerable potential to be used for the dual purpose of increasing crop yield and reducing pesticide use, especially as the global population continues to grow. Therefore, it is essential to continue working on various research avenues that address these challenges. Future research could focus on integrating this model into mobile platforms or drones for field use, which would allow farmers to monitor crops more efficiently and accurately. It may also be possible to explore the integration of this approach with other monitoring systems based on sensors, such as humidity or temperature, to provide a more complete and accurate diagnosis of crop conditions.

5. Conclusions

This study demonstrated the effectiveness of clustered neuro-fuzzy inference models for detecting the disease caused by Xanthomonas campestris in crops. By incorporating fuzzy clustering, two optimized neuro-fuzzy systems identified as the best models were implemented, and the model showed a strong capacity to generate accurate and reliable predictions.
This model is generally based on pattern recognition and digital image processing, capturing data from various sources, including specialized and RGB model images, under random lighting, brightness, and contrast conditions. By subsequently applying the HSB color model and cropping images into specific sections, it provides a diagnosis of the leaves, achieving an accuracy of 93.81%, 92.34% precision, 95.28% sensitivity, and 92.40% specificity. This performance allows users to access a reliable model capable of optimizing the care process and enhancing the productivity of 15 crop species.
Beyond accuracy, the model’s interpretability is a crucial feature of this research. Unlike other deep neural network approaches, the fuzzy inference system allows users to understand the decision-making processes within the model. This transparency is essential for building farmer confidence in the technology and facilitating its adoption.
The study highlights the adaptability of the model across various crop species and its ability to handle diverse environmental conditions. This feature suggests potential for integration into precision agriculture tools like drones and IoT devices, enabling real-time disease monitoring and significantly reducing manual intervention.
By providing an interpretable and accessible diagnostic model, the approach empowers farmers to make informed decisions. This leads to more effective resource management, including optimized pesticide application, contributing to sustainable agricultural practices and reduced environmental impact.
However, the study also identified certain limitations that should be addressed in future work. For example, the model’s reliance on high-quality images and the need for extensive preprocessing may limit its applicability in settings where these resources are not readily available. Therefore, to maximize its impact, solutions should be developed that can integrate with other advanced agricultural technologies, such as drones and real-time sensors, to provide farmers with more holistic and effective responses.

Author Contributions

Conceptualization, D.-D.L.-L. and J.B.-V.; methodology, D.-D.L.-L.; software, D.-D.L.-L.; validation, D.-D.L.-L., J.B.-V. and L.-M.M.-P.; formal analysis, D.-D.L.-L.; investigation, D.-D.L.-L.; resources, D.-D.L.-L.; data curation, L.-M.M.-P.; writing—original draft preparation, D.-D.L.-L.; writing—review and editing, A.-C.C.-B. and L.-M.M.-P.; visualization, D.-D.L.-L.; supervision, J.B.-V.; project administration, J.B.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

We would like to thank the Universidad Distrital Francisco José de Caldas and Fundación Universitaria Los Libertadores for their administrative and technical support throughout this research. Their resources and guidance were invaluable in the successful completion of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Metre, V.A.; Sawarkar, S.D. Reviewing Important Aspects of Plant Leaf Disease Detection and Classification. In Proceedings of the 2022 International Conference for Advancement in Technology (ICONAT), Goa, India, 21–22 January 2022; pp. 1–8. [Google Scholar]
  2. Sujatha, R.; Chatterjee, J.M.; Jhanjhi, N.Z.; Brohi, S.N. Performance of Deep Learning vs Machine Learning in Plant Leaf Disease Detection. Microprocess. Microsyst. 2021, 80, 103615. [Google Scholar] [CrossRef]
  3. Yadav, R.; Kumar, Y.; Nagpal, S. Plant Leaf Disease Detection and Classification Using Particle Swarm Optimization; Springer Nature: Cham, Switzerland, 2019; pp. 294–306. [Google Scholar] [CrossRef]
  4. Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance Analysis of Deep Learning CNN Models for Disease Detection in Plants Using Image Segmentation. Inf. Process. Agric. 2020, 7, 566–574. [Google Scholar] [CrossRef]
  5. Rashid, J.; Khan, I.; Ali, G.; Almotiri, S.H.; AlGhamdi, M.A.; Masood, K. Multi-Level Deep Learning Model for Potato Leaf Disease Recognition. Electronics 2021, 10, 2064. [Google Scholar] [CrossRef]
  6. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  7. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 3289801. [Google Scholar] [CrossRef]
  8. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine Disease Detection Network Based on Multispectral Images and Depth Map. Remote Sens. 2020, 12, 3305. [Google Scholar] [CrossRef]
  9. Kocian, A.; Incrocci, L. Learning from Data to Optimize Control in Precision Farming. Stats 2020, 3, 239–245. [Google Scholar] [CrossRef]
  10. Jackulin, C.; Murugavalli, S. A Comprehensive Review on Detection of Plant Disease Using Machine Learning and Deep Learning Approaches. Meas. Sens. 2022, 24, 100441. [Google Scholar] [CrossRef]
  11. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. Effect of Directional Augmentation Using Supervised Machine Learning Technologies: A Case Study of Strawberry Powdery Mildew Detection. Biosyst. Eng. 2020, 194, 49–60. [Google Scholar] [CrossRef]
  12. Abu Bakar, M.N.; Abdullah, A.H.; Abdul Rahim, N.; Yazid, H.; Misman, S.N.; Masnan, M.J. Rice Leaf Blast Disease Detection Using Multi-Level Colour Image Thresholding. J. Telecommun. Electron. Comput. Eng. (JTEC) 2018, 10, 1–6. [Google Scholar]
  13. Yang, G.; He, Y.; Yang, Y.; Xu, B. Fine-Grained Image Classification for Crop Disease Based on Attention Mechanism. Front. Plant Sci. 2020, 11, 600854. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, K.; Xu, Z.; Dong, S.; Cen, C.; Wu, Q. Identification of Peach Leaf Disease Infected by Xanthomonas campestris with Deep Learning. Eng. Agric. Environ. Food 2019, 12, 388–396. [Google Scholar] [CrossRef]
  15. Van De Vijver, R.; Mertens, K.; Heungens, K.; Somers, B.; Nuyttens, D.; Borra-Serrano, I.; Lootens, P.; Roldán-Ruiz, I.; Vangeyte, J.; Saeys, W. In-Field Detection of Alternaria Solani in Potato Crops Using Hyperspectral Imaging. Comput. Electron. Agric. 2020, 168, 105106. [Google Scholar] [CrossRef]
  16. Kasinathan, T.; Singaraju, D.; Uyyala, S.R. Insect Classification and Detection in Field Crops Using Modern Machine Learning Techniques. Inf. Process. Agric. 2021, 8, 446–457. [Google Scholar] [CrossRef]
  17. Poblete, T.; Camino, C.; Beck, P.S.A.; Hornero, A.; Kattenborn, T.; Saponari, M.; Boscia, D.; Navas-Cortes, J.A.; Zarco-Tejada, P.J. Detection of Xylella fastidiosa Infection Symptoms with Airborne Multispectral and Thermal Imagery: Assessing Bandset Reduction Performance from Hyperspectral Analysis. ISPRS J. Photogramm. Remote Sens. 2020, 162, 27–40. [Google Scholar] [CrossRef]
  18. Cruz, A.; Ampatzidis, Y.; Pierro, R.; Materazzi, A.; Panattoni, A.; De Bellis, L.; Luvisi, A. Detection of Grapevine Yellows Symptoms in Vitis vinifera L. with Artificial Intelligence. Comput. Electron. Agric. 2019, 157, 63–76. [Google Scholar] [CrossRef]
  19. Sinha, M.; Tiwary, R. Utilizing Fuzzy Logic in Precision Agriculture: Techniques for Disease Detection and Management. J. Stat. Math. Eng. 2024, 10, 35–40. [Google Scholar] [CrossRef]
  20. Karande, M.U.; Satarkar, S.L. Optimized Adaptive Fuzzy Expert System-Based Plant Leaf Disease Prediction Model Using Data Through Internet of Things. Migr. Lett. 2024, 21, 339–361. [Google Scholar]
  21. Omrani, E.; Khoshnevisan, B.; Shamshirband, S.; Saboohi, H.; Anuar, N.B.; Nasir, M.H.N.M. Potential of Radial Basis Function-Based Support Vector Regression for Apple Disease Detection. Measurement 2014, 55, 512–519. [Google Scholar] [CrossRef]
  22. Anamisa, D.R.; Satoto, B.D.; Yusuf, M.; Sophan, M.K.; Alamsyah, N.; Muntasa, A. Web-Based Rice Pest and Diseases Detection Using Hybrid Fuzzy and K-Nearest Neighbor Methods. In Proceedings of the 2022 6th International Conference on Informatics and Computational Sciences (ICICoS), Semarang, Indonesia, 28–29 September 2022; pp. 54–59. [Google Scholar]
  23. Kerkech, M.; Hafiane, A.; Canals, R. Vine Disease Detection in UAV Multispectral Images Using Optimized Image Registration and Deep Learning Segmentation Approach. Comput. Electron. Agric. 2020, 174, 105446. [Google Scholar] [CrossRef]
  24. Xiong, Y.; Liang, L.; Wang, L.; She, J.; Wu, M. Identification of Cash Crop Diseases Using Automatic Image Segmentation Algorithm and Deep Learning with Expanded Dataset. Comput. Electron. Agric. 2020, 177, 105712. [Google Scholar] [CrossRef]
  25. Hernández, S.; López, J.L. Uncertainty Quantification for Plant Disease Detection Using Bayesian Deep Learning. Appl. Soft Comput. J. 2020, 96, 106597. [Google Scholar] [CrossRef]
  26. Matsunaga, T.M.; Ogawa, D.; Taguchi-Shiobara, F.; Ishimoto, M.; Matsunaga, S.; Habu, Y. Direct Quantitative Evaluation of Disease Symptoms on Living Plant Leaves Growing under Natural Light. Breed. Sci. 2017, 67, 316–319. [Google Scholar] [CrossRef] [PubMed]
  27. Gargade, A.; Khandekar, S. Custard Apple Leaf Parameter Analysis, Leaf Diseases, and Nutritional Deficiencies Detection Using Machine Learning. In Advances in Signal and Data Processing: Select Proceedings of ICSDP 2019; Springer: Singapore, 2021; pp. 57–74. [Google Scholar]
  28. Elavarasan, D.; Durai Raj Vincent, P.M. Fuzzy Deep Learning-Based Crop Yield Prediction Model for Sustainable Agronomical Frameworks. Neural Comput. Appl. 2021, 33, 13205–13224. [Google Scholar] [CrossRef]
  29. Esmaili, M.; Aliniaeifard, S.; Mashal, M.; Vakilian, K.A.; Ghorbanzadeh, P.; Azadegan, B.; Seif, M.; Didaran, F. Assessment of Adaptive Neuro-Fuzzy Inference System (ANFIS) to Predict Production and Water Productivity of Lettuce in Response to Different Light Intensities and CO2 Concentrations. Agric. Water Manag. 2021, 258, 107201. [Google Scholar] [CrossRef]
  30. Wieczynski, J.; Lucca, G.; Borges, E.; Urio-Larrea, A.; Molina, C.L.; Bustince, H.; Dimuro, G. Application of the Sugeno Integral in Fuzzy Rule-Based Classification. Appl. Soft Comput. 2024, 167, 112265. [Google Scholar] [CrossRef]
  31. Zhang, S.; You, Z.; Wu, X. Plant Disease Leaf Image Segmentation Based on Superpixel Clustering and EM Algorithm. Neural Comput. Appl. 2019, 31, 1225–1232. [Google Scholar] [CrossRef]
  32. Sarlaki, E.; Sharif Paghaleh, A.; Kianmehr, M.H.; Asefpour Vakilian, K. Valorization of Lignite Wastes into Humic Acids: Process Optimization, Energy Efficiency and Structural Features Analysis. Renew. Energy 2021, 163, 105–122. [Google Scholar] [CrossRef]
  33. Yang, J.; Chen, T.; Chen, L.; Zhao, J. Towards A Clustering Guided Rule Interpolation for ANFIS Construction. In Proceedings of the 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Yokohama, Japan, 30 June–5 July 2024; pp. 1–6. [Google Scholar]
  34. Munjal, D.; Singh, L.; Pandey, M.; Lakra, S. A Systematic Review on the Detection and Classification of Plant Diseases Using Machine Learning. Int. J. Softw. Innov. (IJSI) 2023, 11, 1–25. [Google Scholar] [CrossRef]
  35. Nagi, R.; Tripathy, S.S. Plant Disease Identification Using Fuzzy Feature Extraction and PNN. Signal Image Video Process 2023, 17, 2809–2815. [Google Scholar] [CrossRef]
  36. Ayoub Shaikh, T.; Rasool, T.; Rasheed Lone, F. Towards Leveraging the Role of Machine Learning and Artificial Intelligence in Precision Agriculture and Smart Farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
  37. V, S.; Bhagwat, A.; Laxmi, V.; Shrivastava, S. A Custom Backbone UNet Framework with DCGAN Augmentation for Efficient Segmentation of Leaf Spot Diseases in Jasmine Plant. J. Comput. Netw. Commun. 2024, 2024, 5057538. [Google Scholar] [CrossRef]
  38. Jiang, M.; Feng, C.; Fang, X.; Huang, Q.; Zhang, C.; Shi, X. Rice Disease Identification Method Based on Attention Mechanism and Deep Dense Network. Electronics 2023, 12, 508. [Google Scholar] [CrossRef]
  39. Jiang, Z.; Dong, Z.; Jiang, W.; Yang, Y. Recognition of Rice Leaf Diseases and Wheat Leaf Diseases Based on Multi-Task Deep Transfer Learning. Comput. Electron. Agric. 2021, 186, 106184. [Google Scholar] [CrossRef]
Figure 1. Set of images of leaves of different crops under different climatic and geographical conditions: leaves of (a) diseased and (b) healthy plants.
Figure 2. Pixel-based pattern recognition according to an associated color scale in a diseased plant.
Figure 3. Color transformation to HSB model: leaves with (a) diseased and (b) healthy parts.
Figure 4. Structure of the ANFIS neuro-fuzzy network for the first system: (a) general configuration of the neuro-fuzzy sets and their membership functions; (b) architecture of the ANFIS neuro-fuzzy network for classification.
Figure 5. Components for the first neuro-fuzzy inference system: (a) rule set; (b) output neuro-fuzzy surface.
Figure 6. Configuration of the second neuro-fuzzy inference system: (a) rule set; (b) architecture of the ANFIS neuro-fuzzy network for diagnosis.
Figure 7. Confusion matrix of the best-performing model.
Table 1. Sugeno fuzzy inference parameter settings without clustering.
Parameter | Value Without Fuzzy Clustering
Fuzzy operator AND method | The product of fuzzy input values
Fuzzy operator OR method | The maximum of fuzzy input values
Defuzzification method to calculate crisp output values | Weighted average of all rule outputs
Implication method to calculate the consequent fuzzy set | Scales the consequent membership function by the value of the antecedent result
Aggregation method to combine rule consequents | Sum of consequent fuzzy sets
Table 2. Sugeno fuzzy inference parameter settings with clustering.
Parameter | Value with Fuzzy Clustering
Fuzzy operator AND method | The product of fuzzy input values
Fuzzy operator OR method | Probabilistic OR of fuzzy input values
Defuzzification method to calculate crisp output values | Weighted average of all rule outputs
Implication method to calculate the consequent fuzzy set | Scales the consequent membership function by the value of the antecedent outcome
Aggregation method to combine rule consequents | Sum of consequent fuzzy sets
Table 3. Parameters for the implementation of the genetic algorithm.
Parameter | Value
Population size | 100
Maximum number of iterations before the algorithm stops | 100
Individuals in the current generation that are guaranteed to survive to the next generation | 2
Fraction of the population in the next generation, excluding guaranteed survivors, created by the crossover function | 0.8
Time, in seconds, after which the algorithm stops running | 300
Table 4. Error values of the first neuro-fuzzy system.
First ANFIS System
Configuration | Execution | Training MSE | Training RMSE | Training MAE | Training MIN | Training MAX | Validation MSE | Validation RMSE | Validation MAE | Validation MIN | Validation MAX | Mean MSE
Clustered | 4 | 4.03 × 10⁻³ | 6.35 × 10⁻² | 2.31 × 10⁻⁵ | 2.53 × 10⁻⁷ | 9.35 × 10⁻¹ | 2.57 × 10⁻³ | 5.07 × 10⁻² | 2.34 × 10⁻⁵ | 1.51 × 10⁻⁷ | 1.08 | 3.30 × 10⁻³
Clustered | 6 | 1.62 × 10⁻³ | 4.02 × 10⁻² | 1.91 × 10⁻⁵ | 2.11 × 10⁻⁸ | 9.70 × 10⁻¹ | 8.30 × 10⁻⁴ | 2.88 × 10⁻² | 2.09 × 10⁻⁵ | 5.94 × 10⁻⁸ | 2.77 × 10⁻¹ | 1.22 × 10⁻³
Clustered | 11 | 5.58 × 10⁻³ | 7.47 × 10⁻² | 2.47 × 10⁻⁵ | 7.94 × 10⁻⁸ | 9.75 × 10⁻¹ | 2.02 × 10⁻³ | 4.49 × 10⁻² | 2.29 × 10⁻⁵ | 2.06 × 10⁻⁷ | 2.83 × 10⁻¹ | 3.80 × 10⁻³
Clustered | 20 | 2.48 × 10⁻³ | 4.98 × 10⁻² | 2.16 × 10⁻⁵ | 2.97 × 10⁻⁷ | 9.72 × 10⁻¹ | 2.86 × 10⁻³ | 5.35 × 10⁻² | 2.37 × 10⁻⁵ | 3.87 × 10⁻⁷ | 8.31 × 10⁻¹ | 2.67 × 10⁻³
Clustered | 21 | 3.36 × 10⁻³ | 5.79 × 10⁻² | 2.24 × 10⁻⁵ | 6.20 × 10⁻⁷ | 9.75 × 10⁻¹ | 1.01 × 10⁻³ | 3.18 × 10⁻² | 2.19 × 10⁻⁵ | 4.26 × 10⁻⁷ | 1.90 × 10⁻¹ | 2.18 × 10⁻³
Clustered | 40 | 3.55 × 10⁻³ | 5.96 × 10⁻² | 2.26 × 10⁻⁵ | 4.28 × 10⁻⁷ | 8.38 × 10⁻¹ | 2.03 × 10⁻³ | 4.50 × 10⁻² | 2.29 × 10⁻⁵ | 5.92 × 10⁻⁷ | 7.36 × 10⁻¹ | 2.79 × 10⁻³
Nonclustered | 5 | 6.20 × 10⁻² | 2.49 × 10⁻¹ | 2.92 × 10⁻⁴ | 7.94 × 10⁻⁷ | 1.05 | 4.77 × 10⁻³ | 6.91 × 10⁻² | 2.09 × 10⁻³ | 6.10 × 10⁻⁶ | 3.26 × 10⁻¹ | 3.34 × 10⁻²
Nonclustered | 6 | 6.12 × 10⁻² | 2.47 × 10⁻¹ | 2.91 × 10⁻⁴ | 6.92 × 10⁻⁷ | 2.33 × 10⁻¹ | 1.60 × 10⁻³ | 4.00 × 10⁻² | 2.09 × 10⁻³ | 6.53 × 10⁻⁶ | 8.96 × 10⁻¹ | 3.14 × 10⁻²
Nonclustered | 9 | 6.23 × 10⁻² | 2.50 × 10⁻¹ | 2.92 × 10⁻⁴ | 4.89 × 10⁻⁷ | 9.93 × 10⁻¹ | 2.55 × 10⁻³ | 5.05 × 10⁻² | 2.09 × 10⁻³ | 6.52 × 10⁻⁶ | 6.65 × 10⁻¹ | 3.24 × 10⁻²
Nonclustered | 16 | 5.87 × 10⁻² | 2.42 × 10⁻¹ | 2.30 × 10⁻⁴ | 2.11 × 10⁻⁷ | 9.90 × 10⁻¹ | 9.67 × 10⁻⁴ | 3.11 × 10⁻² | 2.09 × 10⁻³ | 5.94 × 10⁻⁶ | 2.77 × 10⁻¹ | 2.98 × 10⁻²
Nonclustered | 39 | 6.30 × 10⁻² | 2.51 × 10⁻¹ | 2.93 × 10⁻⁴ | 7.88 × 10⁻⁷ | 9.94 × 10⁻¹ | 2.15 × 10⁻³ | 4.64 × 10⁻² | 2.09 × 10⁻³ | 6.14 × 10⁻⁶ | 5.21 × 10⁻¹ | 3.26 × 10⁻²
Nonclustered | 46 | 5.97 × 10⁻² | 2.44 × 10⁻¹ | 2.90 × 10⁻⁴ | 6.23 × 10⁻⁷ | 9.76 × 10⁻¹ | 5.89 × 10⁻³ | 7.67 × 10⁻² | 2.09 × 10⁻³ | 5.95 × 10⁻⁶ | 1.05 | 3.28 × 10⁻²
Table 5. Error values of the second neuro-fuzzy system.
Second ANFIS System
Configuration | Execution | Training MSE | Training RMSE | Training MAE | Training MIN | Training MAX | Validation MSE | Validation RMSE | Validation MAE | Validation MIN | Validation MAX | Mean MSE
Clustered | 8 | 8.71 × 10⁻³ | 9.33 × 10⁻² | 3.20 × 10⁻³ | 8.03 × 10⁻⁵ | 3.00 × 10⁻¹ | 6.24 × 10⁻³ | 7.90 × 10⁻² | 4.52 × 10⁻³ | 4.51 × 10⁻³ | 3.96 × 10⁻¹ | 7.48 × 10⁻³
Clustered | 11 | 1.02 × 10⁻² | 1.01 × 10⁻¹ | 3.20 × 10⁻³ | 1.04 × 10⁻⁴ | 2.95 × 10⁻¹ | 5.51 × 10⁻³ | 7.42 × 10⁻² | 4.52 × 10⁻³ | 4.51 × 10⁻³ | 8.19 × 10⁻¹ | 7.87 × 10⁻³
Clustered | 16 | 8.65 × 10⁻³ | 9.30 × 10⁻² | 3.20 × 10⁻³ | 8.95 × 10⁻⁵ | 2.64 × 10⁻¹ | 6.67 × 10⁻³ | 8.17 × 10⁻² | 4.52 × 10⁻³ | 4.51 × 10⁻³ | 6.64 × 10⁻¹ | 7.66 × 10⁻³
Clustered | 29 | 7.21 × 10⁻³ | 8.49 × 10⁻² | 3.19 × 10⁻³ | 6.51 × 10⁻⁵ | 2.24 × 10⁻¹ | 6.18 × 10⁻³ | 7.86 × 10⁻² | 4.51 × 10⁻³ | 4.51 × 10⁻³ | 2.18 × 10⁻¹ | 5.38 × 10⁻³
Clustered | 49 | 6.93 × 10⁻³ | 8.32 × 10⁻² | 3.19 × 10⁻³ | 8.68 × 10⁻⁵ | 2.31 × 10⁻¹ | 7.03 × 10⁻³ | 8.39 × 10⁻² | 4.52 × 10⁻³ | 4.51 × 10⁻³ | 7.69 × 10⁻¹ | 6.98 × 10⁻³
Clustered | 50 | 6.44 × 10⁻³ | 8.03 × 10⁻² | 3.19 × 10⁻³ | 7.18 × 10⁻⁵ | 2.27 × 10⁻¹ | 7.12 × 10⁻³ | 8.44 × 10⁻² | 4.52 × 10⁻³ | 4.52 × 10⁻³ | 2.78 × 10⁻¹ | 6.78 × 10⁻³
Nonclustered | 10 | 6.56 × 10⁻³ | 8.10 × 10⁻² | 3.20 × 10⁻³ | 5.56 × 10⁻⁵ | 2.43 × 10⁻¹ | 8.03 × 10⁻³ | 8.96 × 10⁻² | 3.41 × 10⁻³ | 1.62 × 10⁻⁴ | 7.73 × 10⁻¹ | 7.30 × 10⁻³
Nonclustered | 15 | 7.38 × 10⁻³ | 8.59 × 10⁻² | 3.20 × 10⁻³ | 2.07 × 10⁻⁵ | 2.94 × 10⁻¹ | 5.06 × 10⁻³ | 7.12 × 10⁻² | 3.41 × 10⁻³ | 1.62 × 10⁻⁴ | 3.66 × 10⁻¹ | 6.22 × 10⁻³
Nonclustered | 18 | 7.25 × 10⁻³ | 8.52 × 10⁻² | 3.20 × 10⁻³ | 1.96 × 10⁻⁶ | 2.24 × 10⁻¹ | 5.14 × 10⁻³ | 7.17 × 10⁻² | 3.40 × 10⁻³ | 1.59 × 10⁻⁴ | 2.18 × 10⁻¹ | 6.19 × 10⁻³
Nonclustered | 35 | 8.16 × 10⁻³ | 9.03 × 10⁻² | 3.20 × 10⁻³ | 5.38 × 10⁻⁵ | 2.31 × 10⁻¹ | 4.28 × 10⁻³ | 6.54 × 10⁻² | 3.41 × 10⁻³ | 1.62 × 10⁻⁴ | 7.59 × 10⁻¹ | 6.22 × 10⁻³
Nonclustered | 44 | 7.50 × 10⁻³ | 8.66 × 10⁻² | 3.20 × 10⁻³ | 1.42 × 10⁻⁵ | 2.30 × 10⁻¹ | 4.38 × 10⁻³ | 6.62 × 10⁻² | 3.41 × 10⁻³ | 1.65 × 10⁻⁴ | 3.03 × 10⁻¹ | 5.94 × 10⁻³
Nonclustered | 49 | 6.66 × 10⁻³ | 8.16 × 10⁻² | 3.20 × 10⁻³ | 3.58 × 10⁻⁵ | 2.26 × 10⁻¹ | 4.85 × 10⁻³ | 6.96 × 10⁻² | 3.41 × 10⁻³ | 1.61 × 10⁻⁴ | 5.15 × 10⁻¹ | 5.75 × 10⁻³
Table 6. General results in the training phase.
Fuzzy Inference System | Execution | MSE | RMSE | MAE | MIN | MAX | Precision
First System with Cluster | 6 | 1.62 × 10⁻³ | 4.02 × 10⁻² | 1.91 × 10⁻⁵ | 2.11 × 10⁻⁸ | 9.70 × 10⁻¹ | 95.98%
First System without Cluster | 16 | 5.87 × 10⁻² | 2.42 × 10⁻¹ | 2.30 × 10⁻⁴ | 2.11 × 10⁻⁷ | 9.90 × 10⁻¹ | 75.77%
Second System with Cluster | 29 | 7.21 × 10⁻³ | 8.49 × 10⁻² | 3.19 × 10⁻³ | 6.51 × 10⁻⁵ | 2.24 × 10⁻¹ | 91.51%
Second System without Cluster | 18 | 7.25 × 10⁻³ | 8.52 × 10⁻² | 3.20 × 10⁻³ | 1.96 × 10⁻⁶ | 2.24 × 10⁻¹ | 91.48%
Table 7. General results in the testing phase.
Fuzzy Inference System | Execution | MSE | RMSE | MAE | MIN | MAX | Precision
First System with Cluster | 6 | 8.30 × 10⁻⁴ | 2.88 × 10⁻² | 2.09 × 10⁻⁵ | 5.94 × 10⁻⁸ | 2.77 × 10⁻¹ | 97.12%
First System without Cluster | 16 | 9.67 × 10⁻⁴ | 3.11 × 10⁻² | 2.09 × 10⁻³ | 5.94 × 10⁻⁶ | 2.77 × 10⁻¹ | 96.89%
Second System with Cluster | 29 | 6.18 × 10⁻³ | 7.86 × 10⁻² | 4.51 × 10⁻³ | 4.51 × 10⁻³ | 2.18 × 10⁻¹ | 92.14%
Second System without Cluster | 18 | 5.14 × 10⁻³ | 7.17 × 10⁻² | 3.40 × 10⁻³ | 1.59 × 10⁻⁴ | 2.18 × 10⁻¹ | 92.83%