1. Introduction
Crop losses caused by foliar diseases represent a growing challenge to global food security, especially in the context of demographic expansion, which increases the demand for food [1]. This issue highlights the urgent need to implement sustainable agricultural solutions to mitigate these impacts, particularly for small-scale farmers who depend on healthy crop yields and the economic stability generated by their market products [2,3].
In traditional agriculture, disease detection is carried out empirically through direct and constant observation, and treatment largely depends on the use of pesticides [4]. This approach, besides being slow, costly, uncertain, and unreliable [5], can reduce productive capacity by up to 50% because of the complexity of the task.
Conversely, timely and accurate diagnosis of plant diseases is a key component of precision agriculture [6], where non-destructive remote sensing methods have been widely used to monitor crops in the visible and non-visible spectra [7]. These methods offer novel agricultural solutions to improve and optimize crop yields and constitute a continuously evolving research field [8]. Without dismissing human expertise in solving complex tasks, this approach shows promise for automatically enhancing crop productivity and protecting the environment [9], reducing substantial monitoring efforts and enabling the detection of disease symptoms at early stages [10].
Advances in active remote sensing and image-based diagnostics have enabled the development of detection algorithms based on machine learning [11]. For example, integrated methods have been developed for detecting diseases in rice crops from smartphone-acquired images, using preprocessing in the hue-saturation-brightness (HSB) color space to extract the region of interest, segment the image, and extract the features that allow the severity stages of blight disease to be detected [12].
Similarly, ref. [13] effectively used informative regions of an image to build multiple image classification models through transfer learning, enabling the detection of 21 diseases across 14 fine-grained crops using convolutional neural networks optimized with stochastic gradient descent and achieving an accuracy of 93.05%. These studies demonstrate the effectiveness of deep learning methods for early and robust disease identification based on pattern recognition, such as detecting peach leaf diseases caused by Xanthomonas campestris using architectures like AlexNet fine-tuned with transfer learning, providing reliable agricultural diagnoses [14].
Several studies have also explored machine learning algorithms such as the support vector machine (SVM) [15], k-nearest neighbors (KNN), random forest, naïve Bayes [16], and decision trees [17]. In some cases, supported by hybrid intelligent architectures and optimization techniques such as particle swarm optimization, gradient descent optimizers, and feature selection algorithms [18], these methods have achieved performances between 54.1% and 99.7%.
However, other studies have shown that farmers can improve the precision and efficiency of disease detection and management by employing fuzzy logic techniques in precision agriculture, leading to higher crop yields; fuzzy logic exploits the flexibility and interpretability of logic systems to handle the uncertainties and inaccuracies inherent in agricultural data [19]. In this regard, ref. [20] presented an advanced model for predicting plant leaf disease using an adaptive fuzzy expert system optimized with the cat swarm-based Harris hawks (CSHH) algorithm and data collected via IoT. The method processes leaf images and environmental data, extracting pattern-based features to classify diseases with optimized "if–then" rules. Validated with maize, grape, and tomato datasets, the model demonstrated an accuracy of 94.61%, significantly outperforming traditional approaches such as KNN and SVM.
Similarly, ref. [21] provided a diagnosis of apple black spot using an adaptive neuro-fuzzy system with digital camera images, achieving 89% accuracy. Meanwhile, ref. [22] used a hybrid fuzzy and k-nearest neighbor (KNN) method to detect diseases and problems in rice plants. In this approach, fuzzy logic determines the membership value of the disease detection class, while KNN identifies the closest distance between evaluated data and its k nearest neighbors in the training data, achieving 98.74% accuracy in tests conducted with 200 cases of 13 diseases and pests.
In this context, this study proposes a model for detecting foliar disease caused by Xanthomonas campestris, based on the integration of fuzzy inference systems optimized with machine learning algorithms. The objective was to develop a system that is not only accurate in detecting the disease but also interpretable and accessible to end users, allowing informed decision making. This is particularly important in agriculture, where confidence in automated systems depends heavily on the ability of users to understand and verify the results.
For the development of this article, data acquisition, preprocessing to optimize results, and the configuration of four fuzzy and neuro-fuzzy systems were first specified; the systems were then compared in their training and testing stages, and statistical tests were used to determine which model provides the fastest and most reliable diagnosis.
2. Materials and Methods
2.1. Data Acquisition
The method of detecting plant diseases from images dates back to the 1980s, when one of the first prominent studies was conducted in the USA, where researchers proposed solutions to reduce crop wilt losses by using color infrared photography to detect infections in soybean crops [23]. Early applications involved pattern detection algorithms that combined remote sensing with symptom-based diagnostic techniques, yielding accurate and reliable results [11].
To develop the proposed model, a dataset of 1471 images was compiled covering 15 plant species that are susceptible to infections by various subspecies of Xanthomonas campestris and commonly grown on a large scale. These species include banana (Musa × paradisiaca), radish (Raphanus sativus), walnut (Juglans regia), tomato (Solanum lycopersicum), soybean (Glycine max), pumpkin (Cucurbitaceae), plum (Prunus domestica), pepper (Capsicum annuum), peach (Prunus persica), mango (Mangifera indica), hazelnut (Corylus avellana), cabbage (Brassica oleracea), broccoli (Brassica oleracea var. italica), cauliflower (Brassica oleracea var. botrytis), and bean (Phaseolus vulgaris).
These species were selected because they are essential crops in world agriculture, as many of them play a key role in food security due to their high production, nutritional value, and adaptability to diverse regions and climatic conditions. Therefore, the images used represented both healthy and diseased leaves under different environmental conditions, including variations in lighting, angles, shadows, and brightness levels, simulating real-world agricultural scenarios.
In healthy conditions, the leaves of these species are predominantly green, which facilitates the early detection of disease symptoms, provided the affected foliar area exhibits between 10% and 15% visible symptoms. Moreover, the diversity in the shape, size, and texture of the leaves allows the model to identify specific patterns associated with infection, enhancing its accuracy and robustness in analyzing a wide range of foliar morphologies in diverse agricultural contexts.
The dataset used includes images of both healthy and diseased leaves (see Figure 1) captured with a digital camera under controlled conditions, varying in terms of lighting, angles, shadows, and brightness. Additionally, images from the public PlantVillage dataset (https://plantvillage.psu.edu/plants, accessed on 15 January 2022) were incorporated, which proved particularly valuable due to their unique characteristics, such as standardized backgrounds, consistent sharpness conditions, and broad representativeness of crop features [24]. These attributes not only strengthen the dataset's quality but also improve the model's generalization by including diverse scenarios. Furthermore, PlantVillage is a publicly available resource widely used in agricultural research, facilitating the development of studies based on advanced image processing techniques [25].
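As a minimal sketch of how such a combined image set can be organized for the experiments described later (the folder layout and variable names are assumptions, not taken from the study), the images can be loaded into a labeled MATLAB datastore and split per class to match the 70/30 training/testing partition used in Section 3:

```matlab
% Hypothetical folder layout: dataset/healthy and dataset/diseased, with the
% class label taken from the folder name.
imds = imageDatastore('dataset', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

% 70/30 split per class, matching the training/testing partition reported later.
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.7, 'randomized');

% Check class balance between healthy and diseased leaves.
countEachLabel(imdsTrain)
countEachLabel(imdsTest)
```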
2.2. Preprocessing
The pattern recognition focused on regions with specific symptoms, including small, irregularly shaped spots (1–5 mm in diameter) with black veins and necrotic centers on leaves and stems. Additional symptoms documented by [26] include yellow halos and brown lesions visible from the early stages of infection.
During image collection, specific segments were cropped in various sizes (see Figure 2) to ensure that each input to the model excluded backgrounds that could confuse or overwhelm it with unnecessary information, thus optimizing the training process. The images were stored in the RGB color model to preserve variations in lighting, contrast, angle, and shadow, enabling the model to recognize different leaf characteristics at various times of the day. This approach allows the model to assess a color spectrum associated with disease symptoms in infected leaves, evaluating each pixel and identifying colors corresponding to the disease's natural presentation.
Studies such as [27] have suggested that preprocessing images used for training machine learning models can significantly enhance the visual quality of input images, optimizing color analysis of disease symptoms and, consequently, improving diagnostic accuracy. Thus, following the acquisition and cropping of images, a color transformation was performed on each image, converting from the RGB model to the normalized HSB model. This change was necessary because RGB values are highly sensitive to variations in lighting conditions, contrast, and shadows, which can introduce significant noise and hinder the accurate identification of disease-related patterns [4]. The HSB model, unlike RGB, addresses these limitations by decomposing color into more interpretable components: hue, which defines the chromatic nature (for example, red, yellow, or brown); saturation, which allows differentiation between advanced symptoms (intense colors) and initial symptoms (less saturated colors); and brightness, which reflects the luminosity of the color. This approach enhances the robustness of the model by mitigating the impact of variable lighting conditions through the establishment of thresholds, as shown in Figure 3, where the assignment of a color scale based on yellow, red, and brown tones allowed for the identification of disease symptoms across the different species evaluated.
After this processing, the model input supports richer feature analysis during training by exploiting the different color-space components and parameters. This enables the system to distinguish leaves that lack the characteristic disease spots and whose color falls within the range assigned to healthy leaves, and it improves the model's ability to recognize patterns associated with the disease, supporting more accurate decisions when distinguishing healthy from diseased leaves.
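The following MATLAB sketch illustrates the kind of preprocessing described in this section: cropping a leaf region, converting it from RGB to the HSB space (MATLAB's HSV representation, where brightness corresponds to value), and flagging pixels whose hue, saturation, and brightness fall within an assumed symptom range. The crop rectangle and the threshold values are illustrative placeholders rather than the thresholds actually used in the study.

```matlab
rgb = imread('leaf_sample.jpg');          % hypothetical input image
roi = imcrop(rgb, [120 80 256 256]);      % [x y width height], example region
hsb = rgb2hsv(roi);                       % MATLAB's HSV is equivalent to HSB (B = V)

H = hsb(:,:,1);  S = hsb(:,:,2);  B = hsb(:,:,3);

% Example symptom mask: yellow-to-brown hues with noticeable saturation.
symptomMask = (H > 0.05 & H < 0.17) & (S > 0.35) & (B > 0.25);

% Fraction of the cropped region showing candidate symptom pixels.
symptomRatio = nnz(symptomMask) / numel(symptomMask);
```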
2.3. Model Configuration
Machine learning models are extensively utilized to extract critical crop parameters for prediction. For example, ref. [28] developed a neural network comprising an input layer, a membership function layer, a rule layer, and an output layer that can predict crop yield sustainably. The proposed model combines two systems that are applied holistically and characterized by precision and interpretability.
These systems address imprecise problems for which solutions are often complicated or even impossible to find [29]. The first system is a Sugeno-type inference system for classification. The second is an adaptive neuro-fuzzy inference system (ANFIS) executed on the training data to enhance the model's accuracy and generalization capacity. Additionally, an intelligent hybrid optimization mechanism and the interior point algorithm are incorporated to improve problem-solving capabilities and overall model performance, effectively combining the advantages of fuzzy logic with machine learning.
2.3.1. Sugeno Fuzzy System Configuration
The Sugeno fuzzy system addresses classification problems through fuzzy rules with output functions that are typically linear or constant, providing interpretability and good results in practical applications [30]. In the proposed model, an initial Sugeno system was developed to detect, at the pixel level on the HSB scale, the specific features associated with the color tone of Xanthomonas campestris disease. This output allows a second system with similar characteristics to make decisions regarding the leaf's condition based on the overall results of an image.
The configuration of this first system relies on three fuzzy sets for the input, each structured with three membership functions representing low, medium, and high values on the HSB scale (where H corresponds to hue, S to saturation, and B to brightness). A default linear function is defined for the output, facilitating the modeling of the relationship between input values and system response. This optimization enhances the inferential process and improves decision-making accuracy based on the analyzed visual characteristics.
To compare the accuracy of the systems, configurations were tested both with and without clustering, which identifies groups with similar characteristics within a dataset, aiding in segmentation and analysis, as demonstrated by [31]. In systems utilizing fuzzy clustering, rules are automatically defined, and three fuzzy clustering rules are executed, ensuring that each input is uniformly related to the outputs, with a maximum of 27 rules. For the system implemented without fuzzy clustering, an exploratory analysis of the different HSB values and their significance for identifying healthy or diseased pixels was conducted based on the following rules:
If the value of H is low, S is low, and B is low, then it is healthy;
If the value of H is low, S is medium, and B is low, then it is healthy;
If the value of H is low, S is medium, and B is medium, then it is healthy;
If the value of H is low, S is medium, and B is high, then it is sick;
If the value of H is medium, S is medium, and B is high, then it is sick;
If the value of H is low, S is high, and B is high, then it is sick;
If the value of H is medium, S is high, and B is high, then it is sick.
Table 1 describes the settings used for the Sugeno inference systems without clustering, listing the value assigned to each parameter of the MATLAB sugfis function (MATLAB version R2023b); a minimal sketch of this configuration is given below.
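In the sketch, the membership-function parameters and the output coefficients are illustrative placeholders (the configured values are those in Table 1); only the rule base reproduces the seven rules listed above.

```matlab
fis = sugfis('Name','pixelFIS');

names = ["H","S","B"];
for k = 1:3
    fis = addInput(fis,[0 1],'Name',names(k));
    fis = addMF(fis,names(k),'trimf',[0 0 0.5],'Name','low');
    fis = addMF(fis,names(k),'trimf',[0 0.5 1],'Name','medium');
    fis = addMF(fis,names(k),'trimf',[0.5 1 1],'Name','high');
end

% First-order Sugeno output with two linear membership functions; the constant
% terms approximate the target values reported later (~0.62 healthy, ~0.3884 sick).
fis = addOutput(fis,[0 1],'Name','state');
fis = addMF(fis,'state','linear',[0 0 0 0.62],  'Name','healthy');
fis = addMF(fis,'state','linear',[0 0 0 0.3884],'Name','sick');

% The seven expert rules listed above, in MATLAB rule-text form.
rules = [ ...
    "H==low & S==low & B==low => state=healthy (1)"
    "H==low & S==medium & B==low => state=healthy (1)"
    "H==low & S==medium & B==medium => state=healthy (1)"
    "H==low & S==medium & B==high => state=sick (1)"
    "H==medium & S==medium & B==high => state=sick (1)"
    "H==low & S==high & B==high => state=sick (1)"
    "H==medium & S==high & B==high => state=sick (1)"];
fis = addRule(fis,rules);

% Evaluate one HSB pixel.
pixelState = evalfis(fis,[0.12 0.60 0.85]);
```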
For the systems implemented with fuzzy clustering, the configuration shown in Table 2 was used; these parameters were passed to the MATLAB genfis function, as sketched below.
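The following sketch shows the corresponding clustered variant, in which fuzzy c-means clustering generates the rule base automatically from labeled pixel data. The training matrix and its layout are assumptions; the actual genfis options are those reported in Table 2.

```matlab
% Hypothetical training matrix: one row per pixel, columns [H S B target],
% with target values near 0.62 (healthy) and 0.3884 (diseased).
inHSB    = trainPixels(:,1:3);
outState = trainPixels(:,4);

opt = genfisOptions('FCMClustering','FISType','sugeno');
opt.NumClusters = 3;            % three clusters, as described above

fisClustered = genfis(inHSB, outState, opt);
```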
To finalize the configuration of this first system, three neuro-fuzzy sets were utilized for the input, corresponding to different pixel-level values in the HSB color scale. Each set includes three membership functions, consistent with the configuration defined in the clusters, and an output set that determines the pixel's state (healthy or sick) based on its structure, as shown in Figure 4. The Rule layer, represented by blue circles, integrates the input membership functions using logical operations (e.g., AND, OR) to evaluate the fuzzy rules and generate corresponding outputs. The black circles symbolize crisp (exact) values for both the inputs and the final output, while the white circles represent intermediate fuzzy values, such as degrees of membership for the inputs and outputs. This framework enhances the interpretation and classification of the data, facilitating pattern identification indicative of crop health and ensuring an effective response to variations in color values.
Figure 5 presents the rules and their correspondence with the membership functions. This representation illustrates how the rules establish the relationship between inputs and outputs, reflecting the results through the linear function based on the different values that the input membership functions can assume (HSB values). Additionally, the output neuro-fuzzy surface is visualized, depicting in a three-dimensional graph how variations in the inputs, individually or in combination, affect the system's output. In this graph, the axes represent the input variables, while the vertical axis shows the corresponding output value, providing a clear understanding of the system's behavior within the sample space.
2.3.2. Adaptive Neuro-Fuzzy Inference System (ANFIS) Configuration
Based on the average (avg) results obtained at the pixel level from the first system, the second system’s configuration considers the model’s capacity to determine whether the disease Xanthomonas campestris is present in the entire image. This system comprises five membership functions representing low, medium-low, medium, medium-high, and high values. Similar to the first system, a linear function is assigned for the output by default.
In this case, configurations with and without clustering were also executed. For the systems utilizing fuzzy clustering, rules are defined as in the first system; however, five fuzzy clustering rules are executed. For the system implemented without fuzzy clustering, an exploratory analysis of the different average (avg) values and their significance in identifying healthy or diseased leaves was conducted based on the following rules (a sketch of this single-input system follows the list):
If the avg value is high, then it is healthy;
If the avg value is medium-high, then it is healthy;
If the avg value is medium, then it is healthy;
If the avg value is medium-low, then it is sick;
If the avg value is low, then it is sick.
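Under the same caveats as before (membership-function parameters and output coefficients are illustrative placeholders, not the configured values), this single-input, five-rule Sugeno system could be expressed in MATLAB as follows:

```matlab
fis2 = sugfis('Name','leafFIS');
fis2 = addInput(fis2,[0 1],'Name','avg');
fis2 = addMF(fis2,'avg','trimf',[0    0    0.25],'Name','low');
fis2 = addMF(fis2,'avg','trimf',[0    0.25 0.50],'Name','mediumlow');
fis2 = addMF(fis2,'avg','trimf',[0.25 0.50 0.75],'Name','medium');
fis2 = addMF(fis2,'avg','trimf',[0.50 0.75 1   ],'Name','mediumhigh');
fis2 = addMF(fis2,'avg','trimf',[0.75 1    1   ],'Name','high');

fis2 = addOutput(fis2,[0 1],'Name','diagnosis');
fis2 = addMF(fis2,'diagnosis','linear',[0 0.62],  'Name','healthy');
fis2 = addMF(fis2,'diagnosis','linear',[0 0.3884],'Name','sick');

% The five rules listed above, in MATLAB rule-text form.
rules2 = [ ...
    "avg==high => diagnosis=healthy (1)"
    "avg==mediumhigh => diagnosis=healthy (1)"
    "avg==medium => diagnosis=healthy (1)"
    "avg==mediumlow => diagnosis=sick (1)"
    "avg==low => diagnosis=sick (1)"];
fis2 = addRule(fis2,rules2);

% Diagnose a leaf from the average of the per-pixel outputs of the first
% system (pixelScores is a placeholder vector of those outputs).
leafScore = evalfis(fis2, mean(pixelScores));
```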
Neuro-fuzzy inference systems are created after developing fuzzy inference systems, which combine artificial neural networks and fuzzy logic to facilitate decision making based on imprecise or uncertain information. This capability stems from their sensitivity to the definition of membership functions [32]. Additionally, this system employs a hybrid learning procedure that builds an input–output map based on human knowledge, effectively describing how values can belong to different categories, thereby considerably reducing modeling time [33].
For its configuration, a maximum of 600 training epochs was established, and, as with the fuzzy inference system, configurations with and without clustering were executed under the same parameters. Learning was hybrid: a backpropagation gradient method was used for the input (premise) membership parameters, while least-squares estimation was applied to the output (consequent) parameters. Validation was performed with separate input and expected output test values, which helped avoid overfitting in each case.
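The following sketch shows how this training setup maps onto MATLAB's anfis and anfisOptions functions, assuming the data are arranged with the inputs in the leading columns and the expected output in the last column; setting OptimizationMethod to 1 selects the hybrid rule (backpropagation for the input membership parameters, least squares for the output parameters).

```matlab
% Placeholder matrices: inputs in the leading columns, expected output in the
% last column.
trainData = [trainInputs trainTargets];
valData   = [valInputs   valTargets];

opt = anfisOptions('InitialFIS', fisClustered, ...
    'EpochNumber', 600, ...                 % maximum number of training epochs
    'OptimizationMethod', 1, ...            % 1 = hybrid: backprop + least squares
    'ValidationData', valData);             % used to guard against overfitting

% chkFIS holds the parameters from the epoch with the lowest validation error.
[trainedFIS, trainErr, ~, chkFIS, chkErr] = anfis(trainData, opt);
```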
Finally, the neuro-fuzzy configuration of the second system is based on an input set that represents the average of the values obtained from an image previously evaluated by the first system. This system's structure utilizes a single input set with five membership functions, relating the input data to the output, as shown in Figure 6. The black dots represent the crisp (exact) values for the input and output, while the white dots correspond to intermediate fuzzy values, such as the degrees of membership of the input data and the output state. This configuration provides a rapid and accurate diagnosis of the leaf's condition regarding the presence of Xanthomonas campestris, facilitating early disease identification and contributing to more efficient crop management.
2.4. Optimization
To minimize information loss in the images and ensure the most accurate results from a rapid diagnosis, the model was optimized by implementing a hybrid algorithm. The parameters of this algorithm are specified in Table 3 and were established to enable the model to identify candidate points from healthy pixels. This was achieved by dilating the candidate points twice without utilizing the pixels obtained from an interior point algorithm after the genetic algorithm execution.
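The description above leaves several implementation details open, so the following heavily simplified sketch only illustrates the type of hybrid pipeline implied: a genetic algorithm performs the global search, the interior-point algorithm refines the best candidate, and the mask of candidate healthy pixels is dilated twice. The objective function rmse_of_fis, the parameter bounds, the number of parameters, and the structuring element are all hypothetical placeholders, not the values in Table 3.

```matlab
fitFcn  = @(p) rmse_of_fis(p, trainData);   % hypothetical helper: RMSE of the FIS
nParams = 12;                               % assumed number of tunable parameters
lb = zeros(1,nParams);  ub = ones(1,nParams);

% Global search with the genetic algorithm (Global Optimization Toolbox).
pGA = ga(fitFcn, nParams, [], [], [], [], lb, ub);

% Local refinement with the interior-point algorithm (Optimization Toolbox).
optsIP = optimoptions('fmincon','Algorithm','interior-point');
pBest  = fmincon(fitFcn, pGA, [], [], [], [], lb, ub, [], optsIP);

% Dilate the candidate healthy-pixel mask twice (Image Processing Toolbox).
se = strel('disk',1);
candidateMask = imdilate(imdilate(healthyMask, se), se);
```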
3. Results
Once the variable selection process and the configuration of the neuro-fuzzy systems were defined, four models were developed: two for a system that evaluates the state of the image from pixel-level input on an HSB scale, using either fuzzy clustering in one case or defined rules in the other. The remaining two models determine the diagnosis of the leaf based on the average of all pixels in an image processed by the first system, with established ranges for diseased (0.3884 to 0.54) and healthy (0.55 to 0.70) states.
Each model was run 50 times, utilizing 70% of the dataset for training and the remaining 30% for testing. In the first validation, the results were compared in terms of maximum, minimum, mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) for the training and validation datasets. These results are summarized in Table 4 and Table 5, highlighting the executions with the lowest squared error and identifying the best-performing execution in each case. It is important to note that some models may converge to local minima rather than reaching the global minimum.
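For reference, the error metrics reported in the tables and the image-level decision ranges quoted above can be computed as follows; yTrue, ySim, and avgScore are placeholder variables for the expected outputs, the simulated outputs, and the image-level average of one run.

```matlab
err  = yTrue - ySim;
MSE  = mean(err.^2);
RMSE = sqrt(MSE);
MAE  = mean(abs(err));

% Image-level diagnosis using the ranges quoted above.
isDiseased = (avgScore >= 0.3884) && (avgScore <= 0.54);
isHealthy  = (avgScore >= 0.55)   && (avgScore <= 0.70);
```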
To determine the system with the best accuracy and performance, particularly when using clusters, the results were evaluated in terms of maximum and minimum errors, mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) in the training and validation datasets. Additionally, a Kruskal–Wallis test was conducted to compare the medians of the data groups and assess whether the samples originated from the same population. Finally, a confusion matrix was utilized for the best-performing model to quantitatively evaluate its performance, aiding in identifying errors across all predictions.
3.1. Training
During training, images exhibiting the most recognizable disease patterns were selected to avoid redundancy and confusion that could adversely affect the resulting diagnosis. This approach ensured that the learning process was based on representative and diverse examples. Subsequently, 50 runs of the two systems were conducted, yielding the results summarized in Table 6, which presents the statistical values and highlights the run that achieved the best outcome.
To determine how accurately the cluster-based systems predict actual outcomes and to evaluate the performance of their configurations, their results were analyzed against both simulated and real values. This analysis made it possible to identify potential error patterns and points of confusion in the classifications, and to assess model efficiency, highlighting areas for improvement and adjustment during training to optimize performance and ensure reliable diagnostics.
In the case of the first system, during the training phase, the expected value for a healthy pixel was close to 0.62, while the simulated values for diseased pixels approximated 0.3884. The mean squared error (MSE) recorded was 1.62 × 10⁻³, indicating minimal differences between the simulated and expected values; the correspondingly low MAE and RMSE are clear indicators of a well-calibrated predictive model. Although certain iterations may yield results significantly different from the real values, these are sporadic and do not significantly affect the overall stability of the model.
On the other hand, the performance of the second system during the training phase was evaluated by comparing the average of the values generated by the first system and determining the error based on the prediction of the plant's condition. In this case, an MSE of 7.21 × 10⁻³ was recorded, indicating no large recurring errors, with most simulated values remaining close to the actual ones. This suggests that the model not only follows trends effectively but also exhibits low error dispersion, maintaining a low average error rate and robustness against anomalies.
3.2. Test
For the test phase, images were randomly selected from healthy samples and from those infected with Xanthomonas campestris that had not been used in model training. This approach was taken to more accurately verify the model's ability to recognize previously unseen patterns. As in the training phase, 50 runs were conducted, resulting in the statistical data summary presented in Table 7 and providing an approximation of the actual accuracy of the systems.
As in the training phase, the testing phase also evaluated the accuracy of the cluster-based systems’ predictions in relation to the results. To assess the performance of their configurations, their results were analyzed considering simulated and actual values.
For the first system, the expected value for healthy pixel data remained consistent, close to 0.62, while the simulated values for pixels representing diseased states again approximated 0.3884. In this phase, the mean squared error (MSE) was 8.30 × 10⁻⁴, indicating better performance than in the training phase. However, the simulated data exhibited significant variability, reflecting the potential detection of uncommon events. This variability suggests the model's sensitivity to factors affecting performance and indicates room for adjustment and improved accuracy in future iterations.
Finally, the performance of the second system was evaluated by comparing the simulated results with actual values. In this case, an MSE of 6.18 × 10⁻³ indicates that, although there were some notable differences between the actual and simulated values at specific points, the mean of the squared differences remained relatively small. This means that the simulated values are not far from the actual ones, despite some variations, consolidating a reasonably good model. Additionally, the low MAE suggests that, in absolute terms, the average differences between simulated and actual values are small. While the model does not always predict specific points with precision, the deviations are generally minor and fall within an acceptable range, which is a positive aspect of the model.
Overall, the system in the testing phase performed well in capturing the general trend of the actual data. The average error was small, meaning that although the system cannot perfectly follow abrupt changes or exceptional events, it does not make extremely large errors. These results highlight the robustness of the model and its ability to generalize, which is crucial for its application in real-world agricultural diagnostic scenarios.
3.3. Validation of the Best Model
Based on the results obtained, the first and second neuro-fuzzy systems with clustering were identified as the best-performing models. To validate this conclusion, a Kruskal–Wallis test was conducted. This test, which does not require the data to follow a normal distribution, provides greater versatility for diverse data types by comparing the medians of data groups to determine if the samples originate from the same population, verifying equidistribution. In other words, it assesses whether, after applying optimization algorithms, the model results for the training data and their expected values belong to the same population.
The test was conducted at a 0.05 significance level and yielded a p-value of 0.646. Since this value is above the threshold, we conclude that the samples are equidistributed and belong to the same distribution. Thus, the null hypothesis cannot be rejected, suggesting that the model employing fuzzy clustering produces consistent and reliable results.
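A sketch of this check in MATLAB is shown below, with yExpected and yModel as placeholder vectors for the expected training values and the corresponding model outputs after optimization.

```matlab
values = [yExpected(:); yModel(:)];
groups = [repmat({'expected'}, numel(yExpected), 1); ...
          repmat({'model'},    numel(yModel),    1)];

p = kruskalwallis(values, groups, 'off');   % 'off' suppresses the box plot

if p > 0.05
    disp('H0 not rejected: samples are consistent with a common distribution.');
end
```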
3.4. Overall Model Performance
After achieving optimal results with the clustered neuro-fuzzy inference model, enhanced by a hybrid intelligent algorithm, the model's configuration was evaluated using a confusion matrix applied to the entire dataset, as shown in Figure 7. This matrix demonstrates that the model attained an overall precision of 92.34%, a sensitivity of 95.28%, a specificity of 92.40%, and an accuracy of 93.81%. These metrics underscore the model's effectiveness in detecting the disease, with a high degree of reliability in its predictions. The high sensitivity indicates the model's strength in accurately identifying positive cases, while the specificity highlights its competence in correctly classifying negative cases.
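For clarity, these metrics follow directly from the confusion-matrix counts; the sketch below assumes placeholder label vectors with 0 for healthy and 1 for diseased over the full dataset.

```matlab
C  = confusionmat(trueLabels, predLabels);   % rows: actual class, columns: predicted
TN = C(1,1);  FP = C(1,2);
FN = C(2,1);  TP = C(2,2);

precision   = TP / (TP + FP);
sensitivity = TP / (TP + FN);                % recall for the diseased class
specificity = TN / (TN + FP);
accuracy    = (TP + TN) / sum(C(:));
```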
4. Discussion
In recent years, research focused on artificial vision and pattern recognition has shifted towards automating the disease detection process in crops [34], which has led to the development of systems capable of identifying leaf diseases with less manual intervention [35] and able to diagnose a variety of crop diseases accurately and quickly, providing reliable and fast results through computerized detections [36]. This advancement is especially valuable in crops susceptible to foliar diseases such as blight, which can spread rapidly, causing plant mortality and significantly reducing agricultural productivity [37].
The results obtained in this study highlight the potential of combining fuzzy logic with neural networks, demonstrating that their use is not only effective for modeling complex systems with uncertainty but also for adapting to different agricultural scenarios. This adaptability is due to their ability to integrate environmental factors such as light variations, which facilitates early detection of diseases like Xanthomonas campestris even in the initial stages. Furthermore, the configurability of the ANFIS model allows its application in both crop-specific diseases and multiple species, showing consistent results across various scenarios.
In comparison with other advanced architectures, such as DenseNet (average accuracy of 97%) [38] and advanced multitask networks like VGG16 (accuracy of up to 98.75%) [39], the ANFIS model offers a more interpretable and adaptable alternative. This advantage is particularly relevant in crops with limited historical data, where deep learning techniques require extensive retraining, thus increasing both costs and implementation times.
This approach is particularly relevant for small-scale agriculture, where access to advanced technologies and diagnostic experts is limited; ANFIS provides a more accessible and sustainable solution that can integrate with low-cost technologies such as basic remote sensors, representing a significant advance in the automation of crop monitoring. Additionally, the use of images as input facilitates its implementation in real-time monitoring systems: the model demonstrated a diagnostic accuracy of 93.8%, significantly increasing the reliability of crop monitoring and providing farmers with an accessible tool for rapid intervention that can substantially reduce their losses.
This advancement particularly benefits empirical farmers, as it reduces their dependence on experts for crop care and monitoring, optimizing decision making and contributing to more efficient resource management, which can improve agricultural productivity.
However, it is important to note that, although the model showed high performance under the evaluated conditions, its effectiveness may be affected by factors such as image quality and extreme variations in environmental conditions. Furthermore, while 15 different species were used in the dataset, it would be beneficial to expand the species variety to explore the model’s performance across a wider range of crops.
Finally, it is important to emphasize that machine learning-based techniques have considerable potential to be used for the dual purpose of increasing crop yield and reducing pesticide use, especially as the global population continues to grow. Therefore, it is essential to continue working on various research avenues that address these challenges. Future research could focus on integrating this model into mobile platforms or drones for field use, which would allow farmers to monitor crops more efficiently and accurately. It may also be possible to explore the integration of this approach with other monitoring systems based on sensors, such as humidity or temperature, to provide a more complete and accurate diagnosis of crop conditions.
5. Conclusions
This study demonstrated the effectiveness of clustered neuro-fuzzy inference models for detecting the disease caused by Xanthomonas campestris in crops. By incorporating fuzzy clustering, two optimized neuro-fuzzy systems identified as the best models were implemented, and the model showed a strong capacity to generate accurate and reliable predictions.
This model is generally based on pattern recognition and digital image processing, capturing data from various sources, including specialized and RGB model images, under random lighting, brightness, and contrast conditions. By subsequently applying the HSB color model and cropping images into specific sections, it provides a diagnosis of the leaves, achieving an accuracy of 93.81%, 92.34% precision, 95.28% sensitivity, and 92.40% specificity. This performance allows users to access a reliable model capable of optimizing the care process and enhancing the productivity of 15 crop species.
Beyond accuracy, the model’s interpretability is a crucial feature of this research. Unlike other deep neural network approaches, the fuzzy inference system allows users to understand the decision-making processes within the model. This transparency is essential for building farmer confidence in the technology and facilitating its adoption.
The study highlights the adaptability of the model across various crop species and its ability to handle diverse environmental conditions. This feature suggests potential for integration into precision agriculture tools like drones and IoT devices, enabling real-time disease monitoring and significantly reducing manual intervention.
By providing an interpretable and accessible diagnostic model, the approach empowers farmers to make informed decisions. This leads to more effective resource management, including optimized pesticide application, contributing to sustainable agricultural practices and reduced environmental impact.
However, the study also identified certain limitations that should be addressed in future work. For example, the model’s reliance on high-quality images and the need for extensive preprocessing may limit its applicability in settings where these resources are not readily available. Therefore, to maximize its impact, solutions should be developed that can integrate with other advanced agricultural technologies, such as drones and real-time sensors, to provide farmers with more holistic and effective responses.