Article

Interpretable LAI Fine Inversion of Maize by Fusing Satellite, UAV Multispectral, and Thermal Infrared Images

1 College of Land Science and Technology, China Agricultural University, Beijing 100193, China
2 Precision Agriculture Lab, School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
3 Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100193, China
* Author to whom correspondence should be addressed.
Submission received: 23 December 2024 / Revised: 17 January 2025 / Accepted: 22 January 2025 / Published: 23 January 2025

Abstract

Leaf area index (LAI) serves as a crucial indicator for characterizing the growth and development of maize. However, UAV-based maize LAI inversion is highly susceptible to factors such as weather conditions, light intensity, and sensor performance. Compared with satellite data, UAV data are spectrally less stable, and “spectral fragmentation” is prone to occur during large-scale monitoring. This study addresses the difficulty of achieving both high spatial resolution and spectral consistency in UAV-based maize LAI inversion. A two-stage remote sensing data fusion method integrating coarse and fine fusion was proposed. The SHapley Additive exPlanations (SHAP) model was introduced to investigate the contributions of 20 features in 7 categories to maize LAI inversion, among them the canopy temperature extracted from thermal infrared images. Additionally, the most suitable feature sampling window was determined through multi-scale sampling experiments. The grid search method was used to optimize the hyperparameters of models such as Gradient Boosting, XGBoost, and Random Forest, and their accuracies were compared. The results showed that, with a 3 × 3 feature sampling window and the 9 highest-contributing features, the whole-growth-stage LAI inversion accuracy based on Random Forest reached R2 = 0.90 and RMSE = 0.38 m2/m2. Compared with the single UAV data source mode, the inversion accuracy was enhanced by nearly 25%. The R2 values in the jointing, tasseling, and filling stages were 0.87, 0.86, and 0.62, respectively. Moreover, this study verified the significant role of thermal infrared data in LAI inversion, providing a new method for fine LAI inversion of maize.

1. Introduction

Maize occupies a unique and important position in modern agriculture. It is not only a key food crop planted widely around the globe but also a characteristic agricultural product that meets people’s diverse dietary needs [1]. The key stages in the growth and development of maize (e.g., the emergence, jointing, trumpet, and filling stages) and the corresponding cultivation and management measures all rely on the number and coverage of leaves as an important basis [2]. Leaf area index (LAI) is a key indicator that directly reflects this growth process. However, it is strongly affected by the interaction of environmental factors such as soil fertility, water status, light intensity, and temperature changes, and accurately guiding maize production requires fine inversion at the plot scale, which remains challenging [3,4]. Therefore, finely resolving the remote sensing signatures of maize leaf coverage and growth vigor across growth stages is currently a research difficulty [5].
With the rapid development of remote sensing technology, a new opportunity has emerged for large-area, rapid, and non-destructive monitoring of LAI [6]. By remote sensing data source, LAI inversion methods can be mainly divided into: methods based on optical remote sensing, methods fusing multi-modal remote sensing images, and methods fusing multi-scale remote sensing images. The models used usually include empirical models, physical models, and machine learning models [7].
The LAI inversion method based on optical remote sensing analyzes the reflectance characteristics of vegetation in different spectral bands, and the differences in the vegetation indices calculated from them, to establish a relationship model with LAI. This method is relatively convenient and low-cost in data acquisition and is therefore widely used [8,9]. For example, Pradosh Kumar Parida et al. [10] estimated maize yield by inverting maize LAI based on a UAV multispectral sensor and regression analysis. Anting Guo et al. [11] inverted maize LAI using a Gaussian process model and hyperspectral data, with an R2 of 0.86. However, the accuracy of this method is easily affected by factors such as weather, the spectral bands used, and their spectral resolution. Therefore, preprocessing and feature selection are necessary before LAI inversion [12]. The Pearson correlation coefficient is usually used to compare the correlations of different features with the target variable [10], but this method requires the feature variables to be approximately normally distributed and is not well suited to features with nonlinear relationships [13].
The LAI inversion method fusing multi-modal remote sensing images integrates the unique advantages of various remote sensing data such as optical and LiDAR, makes full use of the complementarity of each modality, and effectively copes with the limitations of any single modality [14]. Optical images provide rich spectral information and reflect the physiological and biochemical characteristics of vegetation; radar images are sensitive to vegetation structure and water content, can penetrate clouds to acquire data, and are not easily affected by weather. Han Ma et al. [15] inverted LAI using a multiple linear regression model based on airborne LiDAR, MODIS, and MISR data, combining these structural parameters with canopy height and measured data, with the highest R2 reaching 0.73. This type of method can leverage the advantages of multi-modal data, but some of these data are difficult and costly to acquire. Thermal infrared images, which have relatively low acquisition costs, can be used to invert LAI [16]. Thermal infrared data reflect the temperature status of vegetation and are closely related to physiological processes such as transpiration and water stress; they are often used for the inversion of crop water indicators [17]. LAI is closely related to vegetation transpiration [18]. Therefore, it is worth exploring whether thermal infrared images can provide effective support for inverting maize LAI by reflecting differences in canopy temperature.
The LAI inversion method fusing multi-scale remote sensing images aims to combine remote sensing images with different spatial resolutions to achieve a comprehensive grasp from fine detail to macroscopic characteristics. Low spatial resolution images usually have a wide swath and relatively stable spectral information [19], so fusing multi-scale imagery can endow high spatial resolution images with more stable spectral characteristics. Taifeng Dong et al. [20] used the STARFM and ESTARFM models to fuse Landsat-8 OLI and MODIS data for leaf area index inversion and then applied data assimilation for biomass inversion, demonstrating that fusion algorithms can enhance the potential of remote sensing for crop growth monitoring. However, for fine monitoring of maize LAI at the plot scale, this type of satellite image fusion is clearly not applicable [21]. UAVs with high spatial resolution are often used for LAI monitoring at the plot scale to distinguish growth differences within the field [11].
However, the quality of UAV images is easily affected by weather conditions, light intensity, flight sorties, sensor performance [22], and image processing software. When the monitoring area is large and the flight time is long, “spectral fragmentation” is prone to occur. Even after later radiometric correction, the overall spectral quality of the image may still be poor [19], and the detail in the inverted LAI may be hard to recover. Satellite remote sensing, with its wide coverage and periodic observation capability, has become the main means of obtaining vegetation information at regional and even global scales [23]. At the plot scale, although its spatial resolution is not sufficient to accurately capture subtle within-plot changes, the consistency and quality of its images are relatively more stable, and the spectral information is usually richer [24]. Therefore, the advantages of satellite and UAV remote sensing data can be combined through remote sensing data fusion. Most current research focuses on fusing satellite images with different spatial and temporal resolutions, which cannot meet the fusion needs of image pairs with large spatial scale differences, such as UAV–satellite pairs [25]; forcing such fusion instead causes the loss of detail in UAV images. Suitable remote sensing data fusion methods therefore need to be explored to bring the advantages of satellite and UAV data into full play.
In summary, some problems remain unsolved in the fine inversion of maize LAI at the plot scale [26]. First, most studies rely on a single remote sensing data source and do not fully exploit the complementary advantages of multi-source data [27]. Although some studies have attempted data fusion, the methods are mostly simple superposition or combinations based on empirical formulas, which fail to effectively integrate the spatial and spectral information of different data sources, leaving large room for improvement in inversion accuracy [28]. Second, feature selection and machine learning model optimization lack systematization and specificity. The models lack interpretability, which easily leads to feature redundancy or the omission of key features [29]. At the same time, the hyperparameters of machine learning models are mostly chosen by experience or trial and error, making it difficult to find the optimal configuration and thereby limiting model performance. Fine inversion of maize LAI at the plot scale requires methods with high spatial resolution, accuracy, and interpretability.
Therefore, this study carried out LAI field measurements and UAV remote sensing monitoring on a maize farm in Shenmu City, Shaanxi Province. It proposed a two-stage remote sensing data fusion method for UAV and satellite multispectral images, improved the interpretability of the inversion model with the SHAP model, explored the contribution of thermal infrared images to fine LAI inversion, and finally compared the capabilities of cutting-edge machine learning algorithms for accurately monitoring maize LAI at the plot scale, with the aim of providing new methods for remote sensing image fusion and the fine inversion of crop condition parameters.

2. Materials and Methods

2.1. Study Area

This study was located in Shenmu City, Shaanxi Province, China (Figure 1), in the transition zone between the loess hilly region and the Inner Mongolia grassland, which presents a unique geographical and ecological landscape. The climate is continental arid and semi-arid, with rainfall and heat concentrated in the same season. The annual average temperature is 8.9 °C, the annual average precipitation is 396 mm, and the annual average evaporation is as high as 1790 mm. The soil is mainly aeolian sandy soil, with poor structure and low organic matter. As an energy city centered on coal resource development, Shenmu City has therefore carried out much work in agricultural production, ecological protection, and environmental governance. There are some fallow plots in the study area, and the actual cultivated area is about 80 mu (≈5.3 ha). After tillage and basal fertilization, maize was sown in several periods (Figure 1c); the numbers in the figure indicate the sowing order, and the area labeled “0” is under the local farmer’s management and was sown earliest. Because the experimental field includes several sowing dates, the growth stages mentioned in this paper are those of the maize sown in the third period. The numbers on each plot in Figure 1d give the plot area (in mu), and integrated water and fertilizer equipment was installed to supply the water and fertilizer needed for maize growth.

2.2. Data Acquisition and Processing

2.2.1. Remote Sensing Data

This study used satellite image data from the Sentinel-2 mission (Figure 1c shows a false color image of the study area based on a Sentinel-2 image) [30]. Sentinel-2 is a high-resolution multispectral imaging mission; with the Sentinel-2A and 2B satellites operating together, the revisit period is 5 days, and the highest spatial resolution reaches 10 m. Based on the Google Earth Engine (GEE) platform [31], the cloud-free Sentinel-2 surface reflectance product closest in acquisition time to the UAV imagery was downloaded. Given that the bands of Sentinel-2 differ in spatial resolution (for example, the red edge band (B6) has a 20 m resolution), the nearest neighbor interpolation method was used to resample the processed bands to 10 m. Table 1 lists the relevant information of the Sentinel-2 data used in this paper.
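As an illustration, this kind of query can be expressed with the GEE Python API roughly as follows; the geometry, date window, cloud threshold, and band list are placeholder assumptions (the four bands are inferred from those named later in the paper), not the authors’ exact script:

```python
# Minimal sketch of retrieving a cloud-free Sentinel-2 SR scene via GEE.
import ee

ee.Initialize()

# Hypothetical study-area rectangle (lon/lat) near Shenmu City.
aoi = ee.Geometry.Rectangle([110.4, 38.8, 110.5, 38.9])

s2 = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(aoi)
    .filterDate("2024-07-01", "2024-07-10")  # window around a UAV flight date
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 5))
    .sort("CLOUDY_PIXEL_PERCENTAGE")
)

# Least cloudy scene; keep green, red, red edge, and NIR bands. Exporting at
# scale=10 resamples the 20 m B6 band to 10 m (nearest neighbor by default).
image = s2.first().select(["B3", "B4", "B6", "B8"])
task = ee.batch.Export.image.toDrive(
    image=image, description="s2_sr_shenmu", region=aoi, scale=10
)
task.start()
```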
This study used the DJI Mavic 3 Multispectral UAV (DJI M3M; Figure 1f shows the aircraft, and Figure 1d is the pre-processed true-color image of the study area taken by this UAV), which can collect accurate multispectral data. Its imaging system is equipped with 1 visible light lens and 4 multispectral lenses; the lens information and image acquisition times are detailed in Table 1. UAV images were collected at four growth stages: the jointing stage, tasseling stage, early filling stage, and late filling stage. Images were acquired between 9:00 and 15:00 on sunny, windless, cloudless days, and a single flight generally lasted about 1 h. The flight height was set to 120 m, with 80% side overlap and 80% forward overlap to ensure accurate mosaicking. DJI Terra software (version 4.2.5) was used to process and mosaic the UAV images, and the final spatial resolution of the resulting image was 4.6 cm.
The DJI Mavic 3 Thermal UAV (DJI M3T, Figure 1g) is the thermal infrared UAV used in this paper. The data acquisition time, flight height, and overlap settings were the same as those of the DJI M3M UAV. The camera resolution is as high as 640 ∗ 512, and the actual spatial resolution is 15 cm. The DJI M3T has a wide-angle telephoto camera and a thermal infrared camera. Since the thermal infrared image captured by the DJI M3T is in R-JPEG format, the temperature information of a certain point or area can be extracted using the DJI Thermal Analysis Tool 3, but the raster value is not the temperature value. Therefore, we used Python (The version number of Python used for both image processing and model development is 3.10.16.) to connect to the DJI TSDK interface to convert the temperature value and extract the POS information, then used DJI Terra to mosaic the temperature image to finally obtain the field canopy temperature data in Tiff format and at the centimeter level.
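The conversion step can be sketched conceptually as below; note that the DJI Thermal SDK is a C library, and the Python binding `tsdk` and both calls in the sketch are hypothetical placeholders, since the paper does not document the exact interface used:

```python
# Conceptual sketch only: `tsdk` is a HYPOTHETICAL Python wrapper around the
# DJI Thermal SDK; the decode call and attribute name are illustrative.
from pathlib import Path
import numpy as np
import tsdk  # hypothetical wrapper, not a real package name


def rjpeg_to_temperature(path: Path) -> np.ndarray:
    """Decode one R-JPEG frame into a 512 x 640 array of temperatures (deg C)."""
    frame = tsdk.decode(str(path))                # hypothetical call
    return np.asarray(frame.temperature_celsius)  # hypothetical attribute


# Convert every frame of one sortie; the POS (GPS) tags remain in the image
# headers, so the converted frames can still be mosaicked in DJI Terra.
for f in sorted(Path("m3t_flight").glob("*.JPG")):
    np.save(f.with_suffix(".npy"), rjpeg_to_temperature(f))
```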
Since the centimeter-level spatial resolution of the UAV imagery exceeds what is needed to guide agricultural production and greatly increases computational cost, all remote sensing images in this study were resampled to 0.5 m; in other words, the spatial resolution of the final inverted LAI product was also 0.5 m. To learn the mapping relationship between the UAV multispectral image and the Sentinel-2 image and to extract the spectral information of ground objects in the satellite image, the UAV multispectral image was additionally resampled to 10 m.
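A minimal resampling sketch with rasterio follows; the file names are placeholders, and using average resampling when coarsening the 4.6 cm UAV image is our assumption, as the paper does not state the resampling method for this step:

```python
# Resample a raster to a square target resolution (map units, here metres).
import rasterio
from rasterio.enums import Resampling


def resample(src_path: str, dst_path: str, target_res: float) -> None:
    with rasterio.open(src_path) as src:
        out_w = int(src.width * src.res[0] / target_res)
        out_h = int(src.height * src.res[1] / target_res)
        data = src.read(
            out_shape=(src.count, out_h, out_w),
            resampling=Resampling.average,  # assumption when coarsening
        )
        # Scale the affine transform to the new grid (rasterio docs pattern).
        transform = src.transform * src.transform.scale(
            src.width / out_w, src.height / out_h
        )
        profile = src.profile
        profile.update(height=out_h, width=out_w, transform=transform)
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(data)


resample("uav_multispectral.tif", "uav_0p5m.tif", 0.5)   # inversion grid
resample("uav_multispectral.tif", "uav_10m.tif", 10.0)   # for learning the S2 mapping
```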

2.2.2. Ground Data

While the UAV images were being acquired, field-measured sample data were collected (ground data were collected only for the first three growth stages), including the leaf area index (LAI) of maize in the study area and the GPS coordinates of the sampling points. The tools used were a tape measure and an RTK i90 receiver; the measurement process is shown in Figure 1e. Sampling covered the jointing, tasseling, and early filling stages. To ensure that the sample points were representative, the study area was first partitioned based on the vegetation index calculated from the Sentinel-2 image, and stratified sampling points were then allocated according to the area of each partition [32]; their positions are shown in Figure 1d. At each sampling point, the maize plant that best represented the growth of the area was selected, and its position was recorded with RTK. After the plant was cut close to the ground, all leaves were separated and classified as folded or unfolded. The length and width of all green leaves were measured, and the LAI was calculated according to Formulas (1) and (2) [33].
$$S_{one\_plant} = \sum_{i=1}^{n} unfold\_l_i \times unfold\_h_i \times 0.75 + \sum_{j=1}^{m} fold\_l_j \times fold\_h_j \times 0.5 \tag{1}$$

$$LAI = S_{one\_plant} \times N_{Unit\_land\_area} / S_{Unit\_land\_area} \tag{2}$$
where $S_{one\_plant}$ represents the leaf area of a single plant; $unfold\_l$ and $unfold\_h$ represent the length and width of the unfolded leaves, respectively; $fold\_l$ and $fold\_h$ represent the length and width of the folded leaves, respectively; $n$ and $m$ represent the number of unfolded and folded leaves of a single maize plant, respectively; $N_{Unit\_land\_area}$ represents the number of maize plants per unit land area; and $S_{Unit\_land\_area}$ represents the unit land area. The distribution and basic statistics of the LAI values collected in the study area are shown in Figure 2. The LAI of maize in the study area increases with the growth stage. The LAI in the filling stage is generally high, with a wide data distribution mainly between 3 and 5. The tasseling stage is intermediate, with the smallest dispersion. The LAI in the jointing stage is relatively low and relatively scattered, with a mean of only 1.73. This distribution is consistent with the actual growth of maize in the study area.
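As a worked illustration of Formulas (1) and (2), the following minimal Python sketch computes a single-plant leaf area and the resulting LAI; the leaf measurements and plant density in the example are invented for demonstration:

```python
# Direct transcription of Formulas (1) and (2): lengths/widths in metres,
# plant density in plants per square metre of sampled ground.
def single_plant_leaf_area(unfolded, folded):
    """unfolded/folded: lists of (length, width) tuples for green leaves."""
    area_unfolded = sum(l * w * 0.75 for l, w in unfolded)
    area_folded = sum(l * w * 0.5 for l, w in folded)
    return area_unfolded + area_folded  # S_one_plant, m^2


def lai(s_one_plant, n_plants, unit_land_area):
    """Formula (2): scale single-plant leaf area by plant count per unit area."""
    return s_one_plant * n_plants / unit_land_area  # m^2 leaf / m^2 ground


# Example: three unfolded and one folded leaf, 6 plants per square metre.
s = single_plant_leaf_area(
    unfolded=[(0.65, 0.08), (0.70, 0.09), (0.60, 0.07)], folded=[(0.30, 0.04)]
)
print(round(lai(s, n_plants=6, unit_land_area=1.0), 2))  # -> 0.74
```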

2.3. Two-Stage Remote Sensing Data Fusion Method

To retain high resolution while obtaining rich and consistent spectral information, this study fused the UAV multispectral image with the corresponding bands of Sentinel-2. However, the prerequisite for remote sensing image data fusion is that the images to be fused have the same spatial attributes, that is, the same coordinate system, the same number of raster rows and columns, and a one-to-one correspondence in spatial positions. Therefore, it is necessary to first resample the Sentinel-2 image to the same spatial resolution as the UAV image, then use the same vector data mask to extract the images to be fused, and finally resample them to the spatial resolution of the Sentinel-2 image.
The two-stage remote sensing data fusion method proposed in this study comprises coarse fusion and fine fusion (Figure 3 (Step 1)). Coarse fusion maps the spectra of UAV images acquired over multiple sorties and at different times onto the spectral domain of the Sentinel-2 image, whose spectra are more stable and consistent, thereby absorbing the richer spectral information of the satellite image.
The corresponding bands of the processed UAV image and the Sentinel-2 image are extracted row by row and column by column to construct a coarse-fusion dataset. The spectral values of the UAV image are used as the independent variables, and the spectral values of the satellite image are used as the target variable. Ten machine learning, statistical learning, and ensemble learning methods, such as CatBoost [34], MLP [35], and RF [36], are used to learn the mapping relationship, and ten-fold cross-validation is adopted to ensure the robustness and generalization of the selected model across data subsets. The model most suitable for coarse fusion is selected according to the average accuracy over the ten training runs and applied to the high-resolution UAV image to achieve coarse fusion, as in the sketch below.
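A minimal sketch of this coarse-fusion step, assuming `uav_10m` and `s2_10m` are co-registered (bands, height, width) reflectance arrays already loaded for the shared bands; only three of the ten candidate models are shown:

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# One sample per 10 m pixel: UAV band values -> target Sentinel-2 band value.
X = uav_10m.reshape(uav_10m.shape[0], -1).T
y = s2_10m[0].ravel()  # target band, e.g. green

candidates = {
    "CatBoost": CatBoostRegressor(verbose=0),
    "RF": RandomForestRegressor(n_estimators=200),
    "MLP": MLPRegressor(max_iter=2000),
}
for name, model in candidates.items():
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2")  # ten-fold CV
    print(f"{name}: mean R2 = {r2.mean():.3f}")

# The best model is then applied per band to the 0.5 m UAV pixels to produce
# a coarse-fusion image with Sentinel-2-like spectra at UAV resolution.
```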
The mechanism of coarse fusion introduces certain errors during operation, producing salt-and-pepper noise and interfering with the spectral information of the UAV image. Therefore, a fine fusion is performed after coarse fusion. In this study, the PCA fusion method is used to extract the key features of the coarse-fusion image, remove redundant information, and fuse it with the original UAV image, further integrating the UAV’s spectral information, improving the fusion quality of the image, and achieving a fine representation of ground object features.
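The paper names PCA fusion as the fine-fusion mechanism but does not spell out the exact variant. As one plausible illustration, the following sketch implements a classic component-substitution PCA fusion; treating the mean of the UAV bands as the high-resolution detail signal is our assumption:

```python
# Classic PCA component-substitution fusion (one possible instantiation).
import numpy as np
from sklearn.decomposition import PCA


def pca_fuse(coarse, uav):
    """coarse, uav: co-registered (n_bands, H, W) arrays at 0.5 m."""
    n_bands, h, w = coarse.shape
    pca = PCA(n_components=n_bands)
    pcs = pca.fit_transform(coarse.reshape(n_bands, -1).T)  # (H*W, n_bands)

    # Substitute PC1 with the UAV detail signal, matched to PC1's mean/std.
    detail = uav.mean(axis=0).ravel()
    detail = (detail - detail.mean()) / detail.std()
    pcs[:, 0] = detail * pcs[:, 0].std() + pcs[:, 0].mean()

    # Back-transform to band space: spectral content of the coarse-fusion
    # image, spatial detail of the original UAV image.
    return pca.inverse_transform(pcs).T.reshape(n_bands, h, w)
```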

2.4. Construction of the LAI Inversion Model Based on Machine Learning

2.4.1. Multi-Scale Sampling Points Window Selection

To determine the most appropriate scale for feature extraction, this study extracts the selected features using multi-scale sampling windows. Taking the feature raster cell corresponding to each data mode at a sampling point as the center, as shown in the rightmost matrix diagram of Figure 3 (Step 1), 0, 1, 2, and 3 rings of raster cells are added, respectively, to form feature sampling windows of size 1 × 1, 3 × 3, 5 × 5, and 7 × 7. The arithmetic mean of all raster values within each window at the position of each sampling point is then calculated and taken as the feature value for that window area, as in the sketch below.
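A window-mean extraction of this kind can be written in a few lines; the assumption here is that sample positions have already been converted to row/column indices on the 0.5 m feature raster:

```python
import numpy as np


def window_mean(raster: np.ndarray, row: int, col: int, size: int) -> float:
    """Mean of a size x size window (size in {1, 3, 5, 7}) centred on (row, col)."""
    r = size // 2  # 0, 1, 2, or 3 rings of cells around the centre
    win = raster[max(row - r, 0): row + r + 1, max(col - r, 0): col + r + 1]
    return float(np.nanmean(win))


# e.g. the 3 x 3 value of the canopy-temperature feature at one sample point:
# value = window_mean(canopy_temp, row=412, col=903, size=3)
```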

2.4.2. Feature Engineering

In this paper, a total of five data modes are designed (Figure 3 (Step 2)), namely Sentinel-2, UAV, Brovey, PCA, and the two-stage remote sensing data fusion method (Fine-Fusion). Sentinel-2 and UAV are single-source remote sensing data modes, while Brovey, PCA, and Fine-Fusion are modes in which the Brovey algorithm, the PCA algorithm, and the algorithm proposed in this paper, respectively, are used to fuse the UAV and Sentinel-2 images. The remote sensing data under these five modes undergo feature extraction and screening through the following methods and are finally used for LAI inversion.
Based on previous research results, this study constructed a relatively comprehensive feature set. It includes 15 vegetation index features in five categories (atmospheric, structural, pigment, water, and soil vegetation indices; Table 2), four spectral features (green, red, red edge, and near-infrared), and the canopy temperature feature extracted from the thermal infrared imagery, which together characterize vegetation across spectral bands and canopy temperature.
There are a total of 20 features in the feature set of this study. To screen the most suitable feature combination for fine LAI inversion, we analyzed the contribution of each feature to the LAI inversion model based on the SHapley Additive exPlanations (SHAP) model. SHAP is grounded in cooperative game theory and can capture nonlinear and complex relationships between features and model outputs; it handles high-dimensional data and is strongly interpretable. Its core idea is to explain a model prediction as the sum of the contributions of the individual features. For a given prediction model $f(x)$, where $x = (x_1, x_2, \ldots, x_m)$ is the input feature vector, the prediction can be expressed as:

$$f(x) = g(E[x]) + \sum_{i=1}^{m} \phi_i(x) \tag{3}$$

where $g(E[x])$ is the expected prediction value of the model, and $\phi_i(x)$ is the SHAP value of the $i$-th feature, representing the contribution of that feature to the prediction. The SHAP value is computed from the Shapley value. For a feature $x_i$, its SHAP value $\phi_i(x)$ is:

$$\phi_i(x) = \sum_{S \subseteq x \setminus \{x_i\}} \frac{|S|! \, (m - |S| - 1)!}{m!} \left[ f_{S \cup \{x_i\}}(x) - f_S(x) \right] \tag{4}$$

where the sum runs over all feature subsets $S$ that do not contain the feature $x_i$, $m$ is the total number of features, $f_{S \cup \{x_i\}}(x)$ is the model prediction under the joint action of the subset $S$ and the feature $x_i$, and $f_S(x)$ is the model prediction under the subset $S$ alone.
SHAP can only represent the contribution of each feature to the LAI inversion model and cannot determine the final feature combination for application. Therefore, after ranking the feature contributions based on SHAP values, we incrementally add one feature at a time from the highest-ranked to the lowest-ranked for the training and validation of the LAI inversion model. By analyzing and comparing the changes in the validation accuracy of the model with different numbers of features, and considering the model performance, we aim to select the minimum number of features to achieve a relatively high inversion accuracy, ultimately determining a more appropriate feature combination.
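A sketch of this two-step procedure (SHAP ranking, then incremental feature addition) using the `shap` package and a Random Forest is given below; `X` (the 20-feature sample matrix) and `y` (measured LAI) are assumed to be preloaded NumPy arrays:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

model = RandomForestRegressor(n_estimators=200).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, 20)

# Rank features by mean |SHAP| contribution, largest first.
order = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]

# Add one feature at a time, highest-ranked first, tracking CV accuracy
# (default 5-fold here) to find the smallest adequate feature count.
for k in range(1, len(order) + 1):
    cols = order[:k]
    r2 = cross_val_score(RandomForestRegressor(), X[:, cols], y, scoring="r2").mean()
    print(f"top {k:2d} features: R2 = {r2:.3f}")
```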

2.5. Modeling and Evaluation

2.5.1. Modeling

In this study, to achieve the fine inversion of the maize leaf area index (LAI), we adopted a variety of machine learning and ensemble learning models based on trees, including Decision Tree [51], AdaBoost [52], Gradient Boosting [53], XGBoost [54], and Random Forest [36]. To help evaluate the relative advantages and overfitting phenomena of nonlinear models in terms of accuracy and generalization ability, we added Linear Regression [55]. These models cover a variety of algorithm types, such as linear and nonlinear (Figure 3 (Step 3)).
Among them, the Decision Tree recursively divides the data through a tree structure and is suitable for handling complex nonlinear relationships, but it is prone to overfitting. To improve the generalization ability of the model, this paper further uses ensemble learning methods based on decision trees, including AdaBoost, Gradient Boosting, and XGBoost. AdaBoost improves the robustness to noise by iteratively training multiple weak learners and combining them with weights; Gradient Boosting optimizes the loss function by gradually building decision trees and exhibits stronger nonlinear fitting ability. XGBoost, as an improved version of Gradient Boosting, further improves the stability and accuracy of the model through more efficient calculation methods and regularization. Random Forest is an ensemble model based on multiple decision trees. It reduces the overfitting risk through random sampling and feature selection and is particularly suitable for handling high-dimensional data and datasets with multiple features. The model has certain robustness and generalization performance.
To optimize the performance of these nonlinear models, this study employs the grid search method to tune common hyperparameters such as the maximum depth, the minimum number of samples required for splitting, the number of estimators, and the learning rate. The maximum depth and the number of estimators determine the depth and number of the trees, respectively: values that are too high invite overfitting, while values that are too low cause underfitting. The minimum number of samples for splitting affects data-splitting decisions: if too small, the model may be overly sensitive to local data and overfit; if too large, subtle features are hard to capture. The learning rate controls the step size of parameter updates: an overly large rate may make the model miss the optimal solution, while an overly small one slows training. A series of preset hyperparameter values is applied to the model one by one; by comparing the inversion accuracies under different hyperparameters, the most suitable combination is determined, balancing the model’s fitting and generalization abilities and reducing the risk of overfitting, as in the sketch below.
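A minimal grid-search sketch for the Random Forest model follows; the grid values are illustrative assumptions chosen to bracket the combination that Table 3 reports as optimal:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": list(range(60, 201, 20)),
    "max_depth": list(range(6, 21, 2)),
    "min_samples_split": list(range(2, 11)),
}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=5,
)
search.fit(X_train, y_train)  # X_train/y_train: the 9-feature training split
print(search.best_params_)    # Table 3: n_estimators=140, min_samples_split=9, max_depth=18
```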

2.5.2. Evaluation Metrics

To comprehensively evaluate the performance of the remote sensing image coarse fusion and LAI fine inversion models, this paper selects three indicators: R2 (Coefficient of Determination), MAE (Mean Absolute Error), and RMSE (Root Mean Square Error). These indicators can help reveal the fidelity of the spatial spectral information after image fusion and the degree of deviation of the predicted value, and examine the accuracy and robustness of the model in LAI fine inversion. The calculation formulas of R2, MAE, and RMSE are shown in Formulas (5)–(7):
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \tag{5}$$

$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right| \tag{6}$$

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} \tag{7}$$

where $y_i$ is the observed value, $\hat{y}_i$ is the model predicted value, $\bar{y}$ is the mean of the observed values, and $n$ is the total number of samples. $R^2$ measures the correlation between predicted and observed values, with a value range of [0, 1]; the closer it is to 1, the better the model fit. The smaller the MAE and RMSE values, the smaller the model error.
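For completeness, Formulas (5)–(7) map directly onto standard scikit-learn calls, as in this short sketch:

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error


def evaluate(y_true, y_pred):
    """Formulas (5)-(7) for observed vs. predicted LAI."""
    return {
        "R2": r2_score(y_true, y_pred),
        "MAE": mean_absolute_error(y_true, y_pred),            # m^2/m^2
        "RMSE": np.sqrt(mean_squared_error(y_true, y_pred)),   # m^2/m^2
    }
```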

3. Results

3.1. Accuracy of Coarse Fusion in Two-Stage Remote Sensing Data Fusion Method

To achieve the fine fusion of multi-source remote sensing images, this study combined coarse fusion and fine fusion into the proposed two-stage remote sensing data fusion method. Figure 4 shows the average accuracy of different models over all bands in the coarse fusion, as well as over the four individual bands: green, red, red edge, and near infrared. Among them, the CatBoost model performs best, with an average R2 as high as 0.897, indicating a good fit to the data. Its average MAE is only 0.0009, meaning that the average absolute deviation between the reflectance of the coarsely fused image and that of Sentinel-2 is extremely small. In contrast, models such as AdaBoost, PLSR, ridge regression, SVR, and Extra-Tree perform relatively weakly in this evaluation. By band, coarse fusion is least accurate in the green band and most accurate in the red and red edge bands, followed by the near-infrared band.

3.2. Training of LAI Fine Inversion Models

3.2.1. Selection of Sampling Points Extraction Window

Figure 5 illustrates the impacts of different feature sampling window scales, data modes, and models on the LAI inversion model. The analysis indicates that the applicable sampling window varies across data modes. Except for the Sentinel-2 data mode, most data modes achieve the best results with a 3 × 3 window across models. The 5 × 5 and 7 × 7 windows cover a relatively large area, so spatial detail is smoothed by averaging, weakening the model’s ability to capture fine changes and reducing consistency with the actual sampling footprint. Therefore, the 3 × 3 sampling window is selected for the final fine LAI inversion.
Compared with the single-source data modes, methods such as Brovey, PCA, and two-stage remote sensing data fusion (Fine-fusion) can all improve the accuracy of fine LAI inversion. Among them, the two-stage remote sensing data fusion method proposed in this paper is more effective, significantly outperforming the single-source data modes of UAV and Sentinel-2. A comparison of various models shows that the linear regression and decision tree methods perform poorly, while the random forest algorithm yields the best results for the fine LAI inversion of maize.

3.2.2. Interpreting LAI Inversion Models

In the training of the LAI fine inversion model, since the Random Forest algorithm performs relatively well, this paper quantitatively analyzes the contribution of each feature to the model output based on the SHAP model. Figure 6a,b shows the overall feature contribution of the inversion model. The canopy temperature has the greatest contribution to the fine inversion of LAI. The higher the canopy temperature, the lower its SHAP value (Figure 6c), indicating that this feature usually reduces the output value of the model. Different crop growth stages have different physiological characteristics and morphological structures, and the relationship between the canopy temperature and LAI is also different. MCARI2 (Modified Chlorophyll Absorption Ratio Index 2) and GCI (Green Chlorophyll Index) show relatively large positive SHAP values.
NGI (Normalized Green Index) and NDWI (Normalized Difference Water Index) both show an inhibitory effect on the LAI inversion result. An increase in LAI means that the number of leaves per unit area increases, and the mutual shielding between leaves intensifies. This changes how the leaves absorb and transmit green light: more green light is absorbed and scattered by the upper leaves, less is reflected to the sensor, and NGI decreases. NDWI is mainly used to extract surface water information. When LAI is low, that is, when vegetation coverage is low, surface water influences the remote sensing signal more strongly, yielding a higher NDWI; when LAI is high, the vegetation shields the signal more, yielding a lower NDWI.
By comparing and analyzing the influence of the number of features on model accuracy (Figure 6d–f), this study finds that model performance first increases and then stabilizes as features are added. The stability of the Decision Tree is relatively poor, and XGBoost and AdaBoost sometimes fluctuate. Random Forest and Gradient Boosting are more stable, with accuracy changing gently as the number of features grows. With 9 features, the accuracy, while not the highest, is still good. Considering accuracy, generalization, robustness, and running efficiency, this study uses 9 features for the fine inversion of LAI, namely: Temperature, MCARI2, GCI, NGI, DVI, NIR, RESR, NDWI, and MDD. The final feature combination includes structural and pigment vegetation indices as well as canopy temperature and spectral information, verifying the effectiveness of SHAP in feature screening.

3.2.3. Model Parameter Selection Based on the Grid Search Method

To further optimize the hyperparameter configuration of each LAI inversion model, this paper adopts the grid search method and tunes the key hyperparameters of each model, including the maximum tree depth (max_depth), the learning rate (learning_rate), and the number of learners (n_estimators). Figure 7 shows the accuracy of each model under different hyperparameter values; the golden star marks the optimal value of a given hyperparameter for a given model.
Table 3 shows the final optimized hyperparameter combinations and the corresponding accuracies of each model under the Fine-fusion data mode and a 3 × 3 feature sampling window. Combined with Figure 7, each model improves in accuracy and stability under its optimal parameter configuration. Taking the Random Forest model as an example, it performs best with n_estimators = 140, min_samples_split = 9, and max_depth = 18, reaching R2 = 0.90 with MAE and RMSE of 0.32 m2/m2 and 0.38 m2/m2, respectively. The performance of the AdaBoost and Gradient Boosting models can also be slightly improved by tuning. Overall, grid search improves the R2, MAE, and RMSE of every model, and each model remains relatively stable under its optimal hyperparameter combination.

3.3. LAI Inversion Model Performance

3.3.1. Quantitative Evaluation of LAI Fine Inversion Model

When quantitatively evaluating the performance of the LAI fine inversion model, we use the optimal feature sampling window, the optimal feature combination, and the optimal model hyperparameters, and train and validate the model with a training-to-validation ratio of 8:2. As shown in Figure 8, ensemble learning models such as Random Forest, AdaBoost, and Gradient Boosting show relatively high fitting accuracy across the entire growth stage. However, the R2 of Decision Tree, Gradient Boosting, and XGBoost reaches 1 during training, indicating that these models carry a certain risk of overfitting. The linear regression model is relatively simple and does not easily overfit even with a small sample, but its accuracy is relatively low. The training and validation accuracies of AdaBoost and Random Forest differ little, indicating relatively stronger generalization, better robustness, and relatively higher accuracy. Among them, the Random Forest algorithm attains R2 = 0.95 and RMSE = 0.25 m2/m2 during training, and R2 = 0.90 and RMSE = 0.38 m2/m2 during validation, an accuracy sufficient for the fine inversion of maize LAI.
The inversion accuracy of the Random Forest algorithm in different growth stages is relatively consistent and clustered around the 1:1 line. The scatter points of linear regression and Gradient Boosting are relatively scattered. From the perspective of a single model in different growth stages, it is found that the model fitting effect in the jointing stage is better than that in other stages and the scatter points are more clustered towards the 1:1 line. The scatter points in the filling stage are generally more scattered, and the simulation accuracy is also significantly lower than that in the jointing stage (Table 4).
As shown in Table 4, in the jointing stage the R2 of AdaBoost reaches 0.90 and that of Random Forest reaches 0.87, significantly higher than that of the linear regression model (R2 = 0.70), showing the advantage of nonlinear models in capturing complex vegetation growth patterns. In the longitudinal comparison, the Random Forest model performs relatively well in every stage, verifying its applicability and stability for inverting maize LAI across growth stages.
In the horizontal comparison, the accuracy of each model in the jointing stage is better than that in the tasseling stage, and the accuracy of each model in the tasseling stage is better than that in the filling stage. However, due to the insufficient sample data used in this study, the accuracy of each model in a single growth stage is generally lower than that of the LAI inversion in the whole growth stage.

3.3.2. Qualitative Evaluation of the LAI Fine Inversion Model

This study adopts five data modes: Sentinel-2, UAV, Brovey [56], PCA [57], and Fine-fusion. Based on the optimal feature combination, a 3 × 3 feature sampling window is set, and the optimal model hyperparameters are used. The Random Forest algorithm is used to invert the leaf area index (LAI) in different growth stages. The LAI spatial distribution in different growth stages under different data modes is shown in Figure 9. The inversion results show that the LAI values in each growth stage are generally below 4.3, which is consistent with the distribution of the field sampling data. Under each data mode, with the progress of the growth stage, the LAI value shows an overall upward trend, which is in line with the actual growth situation of maize in the study area. The LAI distribution inverted by the data fusion method (Fine-fusion) proposed in this paper has more significant heterogeneity and can distinguish the LAI distribution differences in different sowing periods. For example, in the jointing stage, the growth of the farmer-planted area and the first-sowing area is significantly better than that of the second-sowing area, and the growth of the second-sowing area is better than that of the third-sowing area (at this time, the second-sowing area has entered the early tasseling stage), while the growth differences in different sowing periods under other data modes are almost indistinguishable.
The LAI inverted by the Sentinel-2 data mode is relatively uniform in spatial distribution due to its limited spatial resolution, and the overall value is relatively low, making it difficult to effectively distinguish the detailed differences of vegetation. In the UAV data mode, although the fine degree of the LAI spatial distribution is relatively high, the detailed inversion effect of the LAI is not excellent enough, and the LAI distribution differences in different sowing periods are not obvious enough, indicating that the inversion accuracy of this data mode is not high enough. Although the two fusion data modes of Brovey and PCA have some characteristics of the UAV image and the Sentinel-2 image, respectively, because they are directly fused with the resampled Sentinel-2 image, the low spatial resolution has a certain negative impact on the fusion effect, resulting in the loss of many details in the data fusion process.
Figure 10 shows local enlarged views of the LAI inversion results under each mode in the tasseling stage, including the original and 0.5 m resolution UAV true color images, the UAV false color image, and the LAI inversion results of the Sentinel-2, UAV, Brovey, PCA, and Fine-fusion modes (all at 0.5 m spatial resolution). The Fine-fusion mode clearly shows the vegetation boundaries and structural details. Compared with the UAV data mode, the LAI distribution inside the maize plot under this mode shows more pronounced texture detail, making it more advantageous for reflecting within-plot vegetation conditions. The LAI distribution in the Sentinel-2 data mode is relatively uniform. Because the sampling points are set only in the maize plots and its spatial resolution is limited, the inverted LAI of the bare soil is higher than that of the maize plots, which is inconsistent with reality. This characteristic makes the Sentinel-2 mode difficult to apply directly to plot-scale monitoring of vegetation growth, and its low spatial resolution also cannot provide effective guidance for agricultural production. In the PCA and Brovey fusion modes, the influence of Sentinel-2 is clearly visible. The Brovey mode is more strongly affected by its fusion principle, with the LAI of the bare soil slightly higher than that of the maize plots. The PCA fusion mode can extract the important spectral features of the images to be fused; although its LAI inversion result is also relatively uniform under the influence of Sentinel-2, the texture of the LAI can still be seen. This reflects, to some extent, PCA’s ability to retain spectral features during fusion, and it is the reason this study selected PCA for the fine fusion stage.

4. Discussion

4.1. Feature Extraction and Selection

The two-stage remote sensing data fusion method proposed in this study, combined with thermal infrared imagery, selects 9 key features based on the SHAP model. Among them, canopy temperature contributes the most to the fine LAI inversion. This can be explained as follows: in the early growth stage, with a relatively small LAI, the canopy temperature is significantly affected by factors such as soil temperature, usually resulting in a relatively high canopy temperature. In the later growth stage, when the LAI reaches a relatively large value, the canopy structure becomes complex, and the internal heat transfer and heat exchange with the outside world change. The variation pattern of canopy temperature is different from that in the early stage. When the crop water content is relatively sufficient, the canopy temperature is usually lower than that of bare soil.
Two features, MCARI2 and GCI, also exhibit relatively large positive SHAP values. These two features are closely related to the chlorophyll content and photosynthetic activity of vegetation. During the plant growth process, chlorophyll content and photosynthetic activity are key factors influencing LAI. A higher chlorophyll content generally indicates that the vegetation has a stronger photosynthetic capacity, enabling it to support more leaf growth and thus leading to an increase in the LAI value.
This study also optimally selects the feature sampling window scale, which is related to the actual spatial resolution of each data mode. Although the Sentinel-2 image is resampled to 0.5 m, its native spatial resolution remains 10 m, so larger sampling windows give better LAI inversion for this mode. Given that the resampled resolution of each image is 0.5 m, a 1 × 1 window may not even cover a single maize plant, whereas a 3 × 3 window covers 1.5 m × 1.5 m, which closely matches the actual sampling range; the LAI inversion accuracy at this scale is accordingly the highest.

4.2. LAI Inversion by Remote Sensing

By comparing the most basic linear regression model with a variety of advanced machine learning or ensemble learning models, the Random Forest algorithm with the highest accuracy is finally adopted for the fine LAI inversion of maize. The results show that the R2 of the Random Forest model for LAI inversion throughout the entire growth period reaches 0.90, and the RMSE is as low as 0.38 m2/m2. The accuracy of different growth stages also shows the best overall performance.
The Random Forest constructs multiple decision trees and randomly selects features, effectively reducing the overfitting risk and improving generalization. The results show that the R2 of the whole-growth-stage model reaches 0.90, with an RMSE as low as 0.38 m2/m2. Compared with the whole-growth-stage maize LAI inversion with Random Forest by Shuaibing Liu et al. (R2 = 0.83) [58], the method in this study performs better. Across growth stages, the R2 is 0.87 in the jointing stage, 0.86 in the tasseling stage, and 0.62 in the filling stage. In contrast, the R2 values of the Random Forest model combined with thermal infrared data by Xingjiao Yu et al. in the jointing, tasseling, and filling stages of maize are 0.726, 0.715, and 0.732, respectively [59], indicating that the method proposed here has a greater advantage in the first two stages. However, the inversion accuracy in the filling stage is relatively low, which may be affected by the amount and quality of the data and needs further exploration in future studies.

4.3. The Advantages of Remote Sensing Data Fusion

The two-stage remote sensing data fusion method proposed in this study combines the advantages of UAV and Sentinel-2 images: the high spatial resolution of the UAV can clearly depict the boundaries between vegetation and bare land and the fine structures within the field, while the rich spectral information of Sentinel-2 helps to accurately identify the growth status of vegetation [60]. From the perspective of the principle and process of data fusion, the innovation of this study lies in the two-stage fusion strategy. First, by learning the stable and rich spectral information of Sentinel-2 to generate a high-resolution simulated “Sentinel-2 image”, coarse fusion is achieved; then, the image generated by coarse fusion is finely fused with the original UAV image, rather than directly fusing two datasets with a large difference in spatial resolution, avoiding the problem of image spots and detail loss caused by the influence of the spatial resolution of Sentinel-2 in the traditional method.
In the coarse fusion stage, 10 machine learning and ensemble learning models are used to mine the spectral mapping relationship between the UAV and Sentinel-2 images, effectively integrating the spectral characteristics of the two data sources. The CatBoost model performs best in coarse fusion. It is a model under the Gradient Boosting framework that captures feature interactions well and adopts several techniques to prevent overfitting, giving it good robustness [61]. In coarse fusion, its average R2 is as high as 0.897 and its average MAE only 0.0009, indicating that it accurately maps the spectral characteristics of the UAV data onto those of the spatially matched Sentinel-2 data, significantly improving the spectral quality and information richness of the fused image. In the fine fusion stage, the PCA method extracts principal components based on the statistical characteristics of the data, reducing redundancy and highlighting key spatial features [62]. The fused image retains important spectral information while effectively reducing noise and interference, providing more comprehensive and accurate data for the LAI inversion model and significantly improving inversion accuracy and detail fidelity.

4.4. Limitations and Future Work

In terms of data collection, the ground-measured sample size in this study is relatively small. Although the sampling process is based on the vegetation index calculated from the Sentinel-2 satellite image for regional division and adopts a stratified sampling strategy, due to limitations such as manpower, material resources, and time, the number of sample points is still insufficient to fully cover the diversity of vegetation growth conditions and soil conditions in the study area [63]. A small sample size may lead to the model being difficult to fully capture all potential relationships and patterns during the learning process, thus affecting the generalization ability.
In terms of the data fusion method, more advanced algorithms or deep learning models, such as convolutional neural networks [64], can be tried in the future to enhance image feature extraction and better mine the complex relationships in multi-source remote sensing data, thereby improving spectral mapping accuracy. In terms of fused data sources, more remote sensing modalities can be explored, such as LiDAR data and hyperspectral imagery; for different research scales, other satellite image data, such as the Landsat series, can also be selected. In addition, the PCA method used in the fine fusion stage may lose some nonlinear information in the original data; future work can explore fusion methods that better retain nonlinear features to further improve fusion quality and inversion accuracy.

5. Conclusions

This study aimed to address the problems that a single UAV or satellite data source can hardly satisfy, at the plot scale, the combined requirements of high spatial resolution and of spectral integrity and consistency for maize leaf area index (LAI) inversion, and that LAI details are insufficiently resolved in complex environments. A method for the interpretable fine inversion of LAI by fusing satellite, UAV multispectral, and thermal infrared images was proposed. The study confirmed the effectiveness of this method for maize LAI inversion and, meanwhile, provided new ideas for the application of multi-source remote sensing data fusion. In the future, this method can be applied to the precision management of maize cultivation, such as precisely formulating irrigation and fertilization plans. Attempts can also be made to apply the model and method to other crops, such as wheat and rice. Moreover, beyond the existing data sources, the fusion of more data modalities and of satellite images from different sensors can be further explored. The specific conclusions are as follows:
  • Through the two-stage remote sensing data fusion method proposed in this paper, the combination of spatial and spectral integrity and consistency was achieved, providing a high-quality data basis for LAI inversion and significantly improving the accuracy and detail fidelity of the inversion model.
  • Under the conditions of the optimal 3 × 3 feature sampling window and 9 features including canopy temperature, the inversion accuracy of the Random Forest model over the whole growth stage reaches R2 = 0.90 and RMSE = 0.38 m2/m2. Compared with the single UAV data mode (R2 = 0.73), the fusion mode in this paper increases R2 by nearly 25%; the R2 values in the jointing, tasseling, and filling stages are 0.87, 0.86, and 0.62, and the RMSE values are 0.28 m2/m2, 0.15 m2/m2, and 0.42 m2/m2, respectively. At the same time, details inside the plot are retained more completely, the difference between bare soil and maize plots is more significant, and LAI in different sowing periods and growth stages is better distinguished than with a single image source.
  • This study verified the effectiveness of UAV thermal infrared images in LAI inversion, indicating a certain correlation between canopy temperature and LAI and providing a theoretical basis for the fine inversion of LAI by fusing thermal infrared data.

Author Contributions

Conceptualization, Y.Y. and Z.L.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y., H.W. and X.Y.; formal analysis, Y.Y.; investigation, Y.Y. and H.W.; resources, Z.L.; data curation, Y.Y., X.G. and S.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y., Y.Z., S.L., X.Z. and Z.L.; visualization, Y.Y.; supervision, Z.L.; project administration, Z.L. and X.Z.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (SQ2022YFB3900025) and the National Key R&D Program of China and Shandong Province, China (2021YFB3901300).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because our collaborating institutions are still utilizing part of the data.

Acknowledgments

The authors would like to express their sincere gratitude to the graduate students in the same research group, namely Xinqi Fan, Mumingrui Li, Siyuan Chen, Guijia Lyu, and Jialong Hu, who actively participated in the field sample data collection together with us and provided significant assistance throughout the process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, Z.; Shen, G.; Hong, T.; Yu, M.; Li, B.; Gu, Y.; Guo, Y.; Han, J. The Nutritive Quality Comparison of the Processed Fresh Sweet-Waxy Corn from Three Regions in China. J. Food Compos. Anal. 2024, 126, 105872. [Google Scholar] [CrossRef]
  2. Zhang, Y.; Xia, C.; Zhang, X.; Cheng, X.; Feng, G.; Wang, Y.; Gao, Q. Estimating the Maize Biomass by Crop Height and Narrowband Vegetation Indices Derived from UAV-Based Hyperspectral Images. Ecol. Indic. 2021, 129, 107985. [Google Scholar] [CrossRef]
  3. Kalogeropoulos, G.; Elli, E.F.; Trifunovic, S.; Archontoulis, S.V. Historical Increases of Maize Leaf Area Index in the US Corn Belt Due Primarily to Plant Density Increases. Field Crops Res. 2024, 318, 109615. [Google Scholar] [CrossRef]
  4. Huang, X.; Lin, D.; Mao, X.; Zhao, Y. Multi-Source Data Fusion for Estimating Maize Leaf Area Index over the Whole Growing Season under Different Mulching and Irrigation Conditions. Field Crops Res. 2023, 303, 109111. [Google Scholar] [CrossRef]
  5. Guo, Y.; Hao, F.; Zhang, X.; He, Y.; Fu, Y.H. Improving Maize Yield Estimation by Assimilating UAV-Based LAI into WOFOST Model. Field Crops Res. 2024, 315, 109477. [Google Scholar] [CrossRef]
  6. Wang, X.; Ren, J.; Wu, P. Analysis of Growth Variation in Maize Leaf Area Index Based on Time-Series Multispectral Images and Random Forest Models. Agronomy 2024, 14, 2688. [Google Scholar] [CrossRef]
  7. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef]
  8. Toscano, F.; Fiorentino, C.; Capece, N.; Erra, U.; Travascia, D.; Scopa, A.; Drosos, M.; D’Antonio, P. Unmanned Aerial Vehicle for Precision Agriculture: A Review. IEEE Access 2024, 12, 69188–69205. [Google Scholar] [CrossRef]
  9. Gao, X.; Yao, Y.; Chen, S.; Li, Q.; Zhang, X.; Liu, Z.; Zeng, Y.; Ma, Y.; Zhao, Y.; Li, S. Improved Maize Leaf Area Index Inversion Combining Plant Height Corrected Resampling Size and Random Forest Model Using UAV Images at Fine Scale. Eur. J. Agron. 2024, 161, 127360. [Google Scholar] [CrossRef]
  10. Parida, P.K.; Somasundaram, E.; Krishnan, R.; Radhamani, S.; Sivakumar, U.; Parameswari, E.; Raja, R.; Shri Rangasami, S.R.; Sangeetha, S.P.; Gangai Selvi, R. Unmanned Aerial Vehicle-Measured Multispectral Vegetation Indices for Predicting LAI, SPAD Chlorophyll, and Yield of Maize. Agriculture 2024, 14, 1110. [Google Scholar] [CrossRef]
  11. Guo, A.; Ye, H.; Huang, W.; Qian, B.; Wang, J.; Lan, Y.; Wang, S. Inversion of Maize Leaf Area Index from UAV Hyperspectral and Multispectral Imagery. Comput. Electron. Agric. 2023, 212, 108020. [Google Scholar] [CrossRef]
  12. Wang, Y.; Wang, P.; Tansey, K.; Liu, J.; Delaney, B.; Quan, W. An Interpretable Approach Combining Shapley Additive Explanations and LightGBM Based on Data Augmentation for Improving Wheat Yield Estimates. Comput. Electron. Agric. 2025, 229, 109758. [Google Scholar] [CrossRef]
  13. Ming, L.; Wang, Y.; Liu, G.; Meng, L.; Chen, X. Analysis of Vegetation Dynamics from 2001 to 2020 in China’s Ganzhou Rare Earth Mining Area Using Time Series Remote Sensing and SHAP-Enhanced Machine Learning. Ecol. Inform. 2024, 84, 102887. [Google Scholar] [CrossRef]
  14. Yan, P.; Feng, Y.; Han, Q.; Hu, Z.; Huang, X.; Su, K.; Kang, S. Enhanced Cotton Chlorophyll Content Estimation with UAV Multispectral and LiDAR Constrained SCOPE Model. Int. J. Appl. Earth Obs. Geoinf. 2024, 132, 104052. [Google Scholar] [CrossRef]
  15. Ma, H.; Song, J.; Wang, J.; Xiao, Z.; Fu, Z. Improvement of Spatially Continuous Forest LAI Retrieval by Integration of Discrete Airborne LiDAR and Remote Sensing Multi-Angle Optical Data. Agric. For. Meteorol. 2014, 189–190, 60–70. [Google Scholar] [CrossRef]
  16. Rivera, G.; Porras, R.; Florencia, R.; Sánchez-Solís, J.P. LiDAR Applications in Precision Agriculture for Cultivating Crops: A Review of Recent Advances. Comput. Electron. Agric. 2023, 207, 107737. [Google Scholar] [CrossRef]
  17. Yang, H.; Wang, L.; Zhang, X.; Shi, Y.; Wu, Y.; Jiang, Y.; Wang, X. Exploring Optimal Soil Moisture for Seedling Tomatoes Using Thermal Infrared Imaging and Chlorophyll Fluorescence Techniques. Sci. Hortic. 2025, 339, 113846. [Google Scholar] [CrossRef]
  18. Kukal, M.; Irmak, S. Transpiration Dynamics in Co-Located Maize, Sorghum, and Soybean Closed Canopies and Their Environmental Controls. J. Nat. Resour. Agric. Ecosyst. 2024, 2, 1–15. [Google Scholar] [CrossRef]
  19. Jiang, J.; Zhang, Q.; Wang, W.; Wu, Y.; Zheng, H.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. MACA: A Relative Radiometric Correction Method for Multiflight Unmanned Aerial Vehicle Images Based on Concurrent Satellite Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  20. Dong, T.; Liu, J.; Qian, B.; Zhao, T.; Jing, Q.; Geng, X.; Wang, J.; Huffman, T.; Shang, J. Estimating Winter Wheat Biomass by Assimilating Leaf Area Index Derived from Fusion of Landsat-8 and MODIS Data. Int. J. Appl. Earth Obs. Geoinf. 2016, 49, 63–74. [Google Scholar] [CrossRef]
  21. Liu, T.; Duan, S.-B.; Liu, N.; Wei, B.; Yang, J.; Chen, J.; Zhang, L. Estimation of Crop Leaf Area Index Based on Sentinel-2 Images and PROSAIL-Transformer Coupling Model. Comput. Electron. Agric. 2024, 227, 109663. [Google Scholar] [CrossRef]
  22. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A Technical Study on UAV Characteristics for Precision Agriculture Applications and Associated Practical Challenges. Remote Sens. 2021, 13, 1204. [Google Scholar] [CrossRef]
  23. Li, Y.; Ma, Q.; Chen, J.; Croft, H.; Luo, X.; Zheng, T.; Rogers, C.; Liu, J. Fine-Scale Leaf Chlorophyll Distribution across a Deciduous Forest through Two-Step Model Inversion from Sentinel-2 Data. Remote Sens. Environ. 2021, 264, 112618. [Google Scholar] [CrossRef]
  24. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A Compilation of UAV Applications for Precision Agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
  25. Alvarez-Vanhard, E.; Corpetti, T.; Houet, T. UAV & Satellite Synergies for Optical Remote Sensing Applications: A Literature Review. Sci. Remote Sens. 2021, 3, 100019. [Google Scholar] [CrossRef]
  26. Popescu, D.; Stoican, F.; Stamatescu, G.; Ichim, L.; Dragana, C. Advanced UAV–WSN System for Intelligent Monitoring in Precision Agriculture. Sensors 2020, 20, 817. [Google Scholar] [CrossRef]
  27. Zhang, L.; Shen, H. Progress and Future of Remote Sensing Data Fusion. Natl. Remote Sens. Bull. 2016, 20, 1050–1061. [Google Scholar] [CrossRef]
  28. Chen, Y.; Zhao, M.; Bruzzone, L. A Novel Approach to Incomplete Multimodal Learning for Remote Sensing Data Fusion. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
  29. Jing, W.; Lou, T.; Wang, Z.; Zou, W.; Xu, Z.; Mohaisen, L.; Li, C.; Wang, J. A Rigorously-Incremental Spatiotemporal Data Fusion Method for Fusing Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 6723–6738. [Google Scholar] [CrossRef]
  30. Yin, Z.; Wu, P.; Li, X.; Hao, Z.; Ma, X.; Fan, R.; Liu, C.; Ling, F. Super-Resolution Water Body Mapping with a Feature Collaborative CNN Model by Fusing Sentinel-1 and Sentinel-2 Images. Int. J. Appl. Earth Obs. Geoinf. 2024, 134, 104176. [Google Scholar] [CrossRef]
  31. Abunnasr, Y.; Mhawej, M. Towards a Combined Landsat-8 and Sentinel-2 for 10-m Land Surface Temperature Products: The Google Earth Engine Monthly Ten-ST-GEE System. Environ. Model. Softw. 2022, 155, 105456. [Google Scholar] [CrossRef]
  32. Shi, W.; Li, Y.; Zhang, W.; Yu, C.; Zhao, C.; Qiu, J. Monitoring and Zoning Soybean Maturity Using UAV Remote Sensing. Ind. Crops Prod. 2024, 222, 119470. [Google Scholar] [CrossRef]
  33. Xin, J.; Ming, B.; Xue, B.; Yang, H.; Guo, H.; Feng, D.; Xie, R.; Wang, K.; Hou, P.; Li, S.; et al. Unmanned Aerial Vehicle Multispectral Remote Sensing for Monitoring of Nitrogen Nutritional Indicators in High-Yielding Spring Maize in Northeast China. J. Maize Sci. 2024, 32, 92–101. [Google Scholar] [CrossRef]
  34. Ajin, R.S.; Segoni, S.; Fanti, R. Optimization of SVR and CatBoost Models Using Metaheuristic Algorithms to Assess Landslide Susceptibility. Sci. Rep. 2024, 14, 24851. [Google Scholar] [CrossRef] [PubMed]
  35. Usta, A. Prediction of Soil Water Contents and Erodibility Indices Based on Artificial Neural Networks: Using Topography and Remote Sensing. Environ. Monit. Assess. 2022, 194, 794. [Google Scholar] [CrossRef]
  36. Li, X.; Jia, H.; Wang, L. Remote Sensing Monitoring of Drought in Southwest China Using Random Forest and eXtreme Gradient Boosting Methods. Remote Sens. 2023, 15, 4840. [Google Scholar] [CrossRef]
  37. Jiang, Z.; Huete, A.; Kim, Y.; Didan, K. 2-Band Enhanced Vegetation Index without a Blue Band and Its Application to AVHRR Data. Proc. SPIE—Int. Soc. Opt. Eng. 2007, 6679, 45–53. [Google Scholar] [CrossRef]
  38. Roujean, J.-L.; Breon, F.-M. Estimating PAR Absorbed by Vegetation from Bidirectional Reflectance Measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  39. Lu, J.; Miao, Y.; Shi, W.; Li, J.; Yuan, F. Evaluating Different Approaches to Non-Destructive Nitrogen Status Diagnosis of Rice Using Portable RapidSCAN Active Canopy Sensor. Sci. Rep. 2017, 7, 14073. [Google Scholar] [CrossRef]
  40. Beck, P.S.A.; Atzberger, C.; Høgda, K.A.; Johansen, B.; Skidmore, A.K. Improved Monitoring of Vegetation Dynamics at Very High Latitudes: A New Method Using MODIS NDVI. Remote Sens. Environ. 2006, 100, 321–334. [Google Scholar] [CrossRef]
  41. Jordan, C.F. Derivation of Leaf-Area Index from Quality of Light on the Forest Floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  42. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote Estimation of Canopy Chlorophyll Content in Crops. Geophys. Res. Lett. 2005, 32, GL022688. [Google Scholar] [CrossRef]
  43. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  44. Sharifi, A.; Felegari, S. Remotely Sensed Normalized Difference Red-Edge Index for Rangeland Biomass Estimation. Aircr. Eng. Aerosp. Technol. 2023, 95, 1128–1136. [Google Scholar] [CrossRef]
  45. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial Color Infrared Photography for Determining Early In-Season Nitrogen Requirements in Corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  46. Elsayed, S.; Rischbeck, P.; Schmidhalter, U. Comparing the Performance of Active and Passive Reflectance Sensors to Assess the Normalized Relative Canopy Temperature and Grain Yield of Drought-Stressed Barley Cultivars. Field Crops Res. 2015, 177, 148–160. [Google Scholar] [CrossRef]
  47. Erdle, K.; Mistele, B.; Schmidhalter, U. Comparison of Active and Passive Spectral Sensors in Discriminating Biomass Parameters and Nitrogen Status in Wheat Cultivars. Field Crops Res. 2011, 124, 74–84. [Google Scholar] [CrossRef]
  48. McFeeters, S.K. The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  49. Qi, J.; Chehbouni, A.; Huete, A.; Kerr, Y.; Sorooshian, S. A Modified Soil Adjusted Vegetation Index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  50. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  51. Cui, X.; Wang, C.; An, S.; Qian, Y. Adaptive Fuzzy Neighborhood Decision Tree. Appl. Soft Comput. 2024, 167, 112435. [Google Scholar] [CrossRef]
  52. Yousefi, M.; Oskoei, V.; Esmaeli, H.R.; Baziar, M. An Innovative Combination of Extra Trees within Adaboost for Accurate Prediction of Agricultural Water Quality Indices. Results Eng. 2024, 24, 103534. [Google Scholar] [CrossRef]
  53. Fan, J.; Zheng, J.; Wu, L.; Zhang, F. Estimation of Daily Maize Transpiration Using Support Vector Machines, Extreme Gradient Boosting, Artificial and Deep Neural Networks Models. Agric. Water Manag. 2021, 245, 106547. [Google Scholar] [CrossRef]
  54. Li, Y.; Zeng, H.; Zhang, M.; Wu, B.; Zhao, Y.; Yao, X.; Cheng, T.; Qin, X.; Wu, F. A County-Level Soybean Yield Prediction Framework Coupled with XGBoost and Multidimensional Feature Engineering. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103269. [Google Scholar] [CrossRef]
  55. Huang, M. Theory and Implementation of Linear Regression. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Chongqing, China, 10–12 July 2020; pp. 210–217. [Google Scholar]
  56. Chandrasekaran, V. A Segmentation Method of Fused Multispectral and Panchromatic Images Using Quick Shift Algorithm and Brovey Transform. Int. J. Eng. Adv. Technol. 2019, 9, 486–494. [Google Scholar] [CrossRef]
  57. Wu, Z.; Huang, Y.; Zhang, K. Remote Sensing Image Fusion Method Based on PCA and Curvelet Transform. J. Indian Soc. Remote Sens. 2018, 46, 687–695. [Google Scholar] [CrossRef]
  58. Liu, S.; Jin, X.; Bai, Y.; Wu, W.; Cui, N.; Cheng, M.; Liu, Y.; Meng, L.; Jia, X.; Nie, C.; et al. UAV Multispectral Images for Accurate Estimation of the Maize LAI Considering the Effect of Soil Background. Int. J. Appl. Earth Obs. Geoinf. 2023, 121, 103383. [Google Scholar] [CrossRef]
  59. Yu, X.; Huo, X.; Qian, L.; Du, Y.; Liu, D.; Cao, Q.; Wang, W.; Hu, X.; Yang, X.; Fan, S. Combining UAV Multispectral and Thermal Infrared Data for Maize Growth Parameter Estimation. Agriculture 2024, 14, 2004. [Google Scholar] [CrossRef]
  60. Li, W.; Jiang, J.; Guo, T.; Zhou, M.; Tang, P.; Wang, Y.; Zhang, Y.; Cheng, T.; Zhu, Y.; Cao, W.; et al. Generating Red-Edge Images at 3 M Spatial Resolution by Fusing Sentinel-2 and Planet Satellite Products. Remote Sens. 2019, 11, 1422. [Google Scholar] [CrossRef]
  61. Zhai, W.; Li, C.; Fei, S.; Liu, Y.; Ding, F.; Cheng, Q.; Chen, Z. CatBoost Algorithm for Estimating Maize Above-Ground Biomass Using Unmanned Aerial Vehicle-Based Multi-Source Sensor Data and SPAD Values. Comput. Electron. Agric. 2023, 214, 108306. [Google Scholar] [CrossRef]
  62. Yang, W.; Wang, J.; Guo, J. A Novel Algorithm for Satellite Images Fusion Based on Compressed Sensing and PCA. Math. Probl. Eng. 2013, 2013, 708985. [Google Scholar] [CrossRef]
  63. Jia, Y.; Gao, J.; Huang, W.; Yuan, Y.; Wang, Q. Exploring Hard Samples in Multi-View for Few-Shot Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  64. Li, X.; He, H.; Shi, J. HDCCT: Hybrid Densely Connected CNN and Transformer for Infrared and Visible Image Fusion. Electronics 2024, 13, 3470. [Google Scholar] [CrossRef]
Figure 1. Location of the experimental farm and tools in this study. (a,b) are the thumbnail and enlarged views of the study area's location. (c) is a false-color image of the study area based on Sentinel-2. The yellow box indicates the maize-planting area, and the numbers inside represent the sowing sequence. (d) is a true-color image of the study area captured by a drone. The numbers on each plot denote the plot area in mu. (e) is a photo of the sample-processing site. (f) shows the DJI Mavic 3M drone, and (g) shows the DJI Mavic 3T drone.
Figure 2. Distribution of LAI samples across different growth stages.
Figure 3. Flowchart for the interpretable fine LAI inversion of maize by fusing satellite, UAV multispectral, and thermal infrared images. Step 1 is the process of data processing and integration. Step 2 is the process of feature engineering. Step 3 is the process of constructing and evaluating the LAI inversion model.
Figure 4. Average accuracy of ten-fold cross-validation for coarse fusion of UAV and Sentinel-2. (a) represents the R2 of coarse fusion, (b) represents the MAE of coarse fusion.
Figure 5. Accuracy of feature sampling window for different sampling points corresponding to different data modes and models.
Figure 6. Feature engineering of the optimal model for maize LAI inversion based on SHapley Additive exPlanations (SHAP) using UAV images. (a) summary diagram of the optimal LAI inversion model, (b) bar chart of average |SHAP value|, and (c) dependency diagram of the best features, all based on the Random Forest model; (d–f) represent the performance of models with different numbers of features.
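Since Figure 6 relies on SHAP to rank feature contributions, a minimal sketch of how such summary and bar plots can be produced with the shap library is given below. The placeholder data and the feature-name ordering are assumptions, not the paper's actual inputs.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Placeholder data standing in for the 9-feature sample matrix and measured LAI.
rng = np.random.default_rng(0)
X = rng.random((120, 9))
y = rng.random(120) * 5.0
feature_names = ["canopy_temp", "NDVI", "NDRE", "GNDVI", "GCI",
                 "MSAVI", "SAVI", "EVI2", "NDWI"]   # hypothetical ordering

model = RandomForestRegressor(n_estimators=140, max_depth=18,
                              random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)       # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)

# Beeswarm summary (cf. Figure 6a) and mean |SHAP| ranking (cf. Figure 6b).
shap.summary_plot(shap_values, X, feature_names=feature_names)
shap.summary_plot(shap_values, X, feature_names=feature_names, plot_type="bar")
```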
Figure 7. Grid search scores for each LAI fine inversion model using UAV data mode. (a–c) Grid search score (R2) for XGBoost model, (d,e) Grid search score (R2) for Decision Tree model, (f,g) Grid search score (R2) for AdaBoost model, (h,i) Grid search score (R2) for Gradient Boosting model, (j–l) Grid search score (R2) for Random Forest model.
Figure 8. Scatter plots of training accuracy and validation accuracy for the Fine-fusion model with six LAI inversion models.
Figure 9. LAI inversion results of maize at different growth stages under different data modes. The results in the Sentinel-2 mode have a spatial resolution of 10 m; all other results have a spatial resolution of 0.5 m.
Figure 10. Local enlarged views of the study area and LAI inverted by different data modes. (a) Original UAV true color image, (b) UAV true color image resampled to 0.5 m, (c) Original UAV false color image, (d) LAI inverted under Sentinel-2 data mode, (e) LAI inverted under UAV data mode, (f) LAI inverted under Brovey data mode, (g) LAI inverted under PCA data mode, (h) LAI inverted under Fine-fusion data mode. The displayed growth stage is the tasseling stage.
Table 1. Information on Sentinel-2 (S-2) bands and corresponding bands of UAV multispectral data.

| Description | S-2 Band | S-2 Center Wavelength (nm) | UAV Band | UAV Center Wavelength (nm) |
|---|---|---|---|---|
| Green | B3 | 560 | B1 | 560 |
| Red | B4 | 665 | B2 | 650 |
| Red Edge (REG) | B6 | 740 | B3 | 730 |
| Near Infrared (NIR) | B8 | 842 | B4 | 860 |

Acquisition times: S-2 on 2024-08-05, 2024-08-20, 2024-09-09, and 2024-09-24; UAV on 2024-08-03, 2024-08-20, 2024-09-10, and 2024-09-26.
Table 2. List of features. Red, Green, REG, and NIR are the red, green, red edge, and near-infrared channel values of the remote sensing images (in nm).

| Index Class | Abbreviation | Full Name | Formulation | Reference |
|---|---|---|---|---|
| Atmospheric VIs | EVI2 | Enhanced Vegetation Index 2 | 2.5 ∗ (NIR − Red)/(NIR + 2.4 ∗ Red + 1) | [37] |
| Structural VIs | DVI | Difference Vegetation Index | NIR − Red | [38] |
| | MDD | Modified Difference in DVI | (NIR − REG) − (REG − Green) | [39] |
| | NDVI | Normalized Difference Vegetation Index | (NIR − Red)/(NIR + Red) | [40] |
| | RVI | Ratio Vegetation Index | NIR/Red | [41] |
| Pigment VIs | GCI | Green Chlorophyll Index | NIR/Green − 1 | [42] |
| | GNDVI | Green Normalized Difference Vegetation Index | (NIR − Green)/(NIR + Green) | [43] |
| | MCARI2 | Modified Chlorophyll Absorption Ratio Index 2 | 1.5 ∗ (2.5 ∗ (NIR − Red) − 1.3 ∗ (NIR − REG))/sqrt((2 ∗ NIR + 1)^2 − (6 ∗ NIR − 5 ∗ sqrt(Red))) | [39] |
| | NDRE | Normalized Difference Red Edge Index | (NIR − REG)/(NIR + REG) | [44] |
| | NGI | Normalized Green Index | Green/(NIR + REG + Green) | [45] |
| | RENDVI | Red Edge NDVI | (REG − Red)/(REG + Red) | [46] |
| | RESR | Red Edge Simple Ratio | REG/Red | [47] |
| Water VIs | NDWI | Normalized Difference Water Index | (Green − NIR)/(Green + NIR) | [48] |
| Soil VIs | MSAVI | Modified Soil-Adjusted Vegetation Index | (2 ∗ NIR + 1 − sqrt((2 ∗ NIR + 1)^2 − 8 ∗ (NIR − Red)))/2 | [49] |
| | SAVI | Soil-Adjusted Vegetation Index | (NIR − Red)/(NIR + Red + 0.5) ∗ 1.5 | [50] |
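To make the Table 2 formulations concrete, the sketch below computes a subset of the listed indices from NumPy reflectance arrays. The band array names and the small epsilon guard against division by zero are our own additions.

```python
import numpy as np

def vegetation_indices(green, red, reg, nir):
    """Compute several Table 2 indices from per-band reflectance arrays."""
    eps = 1e-9  # guard against zero denominators on bare-soil pixels
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "NDRE":  (nir - reg) / (nir + reg + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "EVI2":  2.5 * (nir - red) / (nir + 2.4 * red + 1.0),
        "MSAVI": (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2
                                        - 8 * (nir - red))) / 2,
        "SAVI":  (nir - red) / (nir + red + 0.5) * 1.5,
    }

# Usage with placeholder reflectance bands (values in [0, 1]):
bands = {k: np.random.rand(100, 100) for k in ("green", "red", "reg", "nir")}
indices = vegetation_indices(**bands)
```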
Table 3. Final selected parameters and their accuracies for different fine inversion models for LAI using the Fine-fusion data mode with a 3 × 3 feature sampling window.

| Model | n_Estimators | Max_Depth | Learning_Rate | Min_Samples_Split | R2 | MAE (m2/m2) | RMSE (m2/m2) |
|---|---|---|---|---|---|---|---|
| XGBoost | 20 | 1 | 0.01 | - | 0.81 | 0.39 | 0.52 |
| Decision Tree | - | 2 | - | 11 | 0.80 | 0.39 | 0.52 |
| AdaBoost | 100 | - | 0.11 | - | 0.86 | 0.37 | 0.44 |
| Gradient Boosting | 30 | 1 | 0.1 | 2 | 0.84 | 0.40 | 0.47 |
| Random Forest | 140 | 18 | - | 9 | 0.90 | 0.32 | 0.38 |
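The hyperparameters in Table 3 were selected by grid search; the sketch below shows one way to reproduce such a search for the Random Forest with scikit-learn's GridSearchCV. The candidate grids are hypothetical ranges bracketing the selected values (140, 18, 9), since the paper's exact search spaces are not listed here, and the data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((120, 9))        # placeholder 9-feature sample matrix
y = rng.random(120) * 5.0       # placeholder measured LAI (m2/m2)

# Hypothetical grids bracketing the Table 3 selections.
param_grid = {
    "n_estimators": [100, 120, 140, 160],
    "max_depth": [14, 16, 18, 20],
    "min_samples_split": [5, 7, 9, 11],
}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, scoring="r2", cv=10)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```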
Table 4. Accuracy analysis of LAI fine inversion at different growth stages with different models. RMSE is in m2/m2.

| Model | Jointing R2 | Jointing RMSE | Tasseling R2 | Tasseling RMSE | Filling R2 | Filling RMSE | All Stages R2 | All Stages RMSE |
|---|---|---|---|---|---|---|---|---|
| Linear Regression | 0.70 | 0.69 | 0.56 | 0.38 | 0.60 | 0.47 | 0.83 | 0.48 |
| Decision Tree | 0.79 | 0.38 | 0.57 | 0.32 | 0.61 | 0.44 | 0.80 | 0.52 |
| AdaBoost | 0.90 | 0.23 | 0.70 | 0.26 | 0.60 | 0.45 | 0.86 | 0.44 |
| Gradient Boosting | 0.74 | 0.44 | 0.77 | 0.26 | 0.67 | 0.39 | 0.84 | 0.47 |
| XGBoost | 0.83 | 0.31 | 0.64 | 0.27 | 0.53 | 0.48 | 0.81 | 0.52 |
| Random Forest | 0.87 | 0.28 | 0.86 | 0.15 | 0.62 | 0.42 | 0.90 | 0.38 |
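For reference, the R2 and RMSE values in Table 4 follow the standard definitions; a small helper illustrating the per-stage computation is sketched below, with hypothetical validation vectors and stage labels standing in for the study's data.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    """Return R2 and RMSE (m2/m2), the two metrics reported in Table 4."""
    return r2_score(y_true, y_pred), float(np.sqrt(mean_squared_error(y_true, y_pred)))

# Hypothetical validation vectors with a growth-stage label per sample.
rng = np.random.default_rng(0)
y_true = rng.random(90) * 5.0
y_pred = y_true + rng.normal(0, 0.3, 90)
stage = np.repeat(["jointing", "tasseling", "filling"], 30)

for s in ("jointing", "tasseling", "filling"):
    r2, rmse = evaluate(y_true[stage == s], y_pred[stage == s])
    print(f"{s}: R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```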
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
