1. Introduction
Floods, which are among the most destructive natural disasters worldwide, occur frequently and have widespread impacts, posing serious threats to socioeconomic development, the ecological environment, and the safety of human life and property [
1]. Statistics show that between 1900 and 2023, out of 16,535 natural disasters worldwide, floods accounted for the highest proportion at 35% [
2], severely limiting progress toward sustainable development goals. Accurately and efficiently extracting flood inundation extents has therefore become a core research topic in disaster prevention and emergency response [
3], with significant practical value for improving disaster monitoring, early warning capabilities, and resource allocation.
With the rapid development of remote sensing technology, especially synthetic aperture radar (SAR) technology, SAR has been widely used in flood mapping [
4,
5]. SAR is an active imaging system that uses electromagnetic waves in the microwave band, which can penetrate clouds, rain, fog, and smoke, enabling all-weather, day-and-night monitoring of the Earth's surface regardless of illumination and weather conditions [
6,
7,
8]. There is a significant difference in the backscattering coefficient between water bodies and terrestrial features in SAR imagery; water bodies typically exhibit low backscatter and appear dark [
9], whereas terrestrial features, owing to their structure and material properties, exhibit higher backscatter and appear bright. This characteristic gives SAR imagery a unique advantage in extracting flood inundation extents. However, SAR imagery also suffers from speckle noise, geometric distortions, and the complex scattering mechanisms of ground objects, which complicate image interpretation [
10].
With the diversification of remote sensing data acquisition methods, increasing amounts of multisource and multimodal data are being applied in flood monitoring. Dual-polarization SAR imagery can provide backscatter information under different polarization modes, such as horizontal–horizontal (HH), vertical–vertical (VV), horizontal–vertical (HV), and vertical–horizontal (VH) modes, enriching the characterization of the electromagnetic scattering properties of ground objects [
11]. A digital elevation model (DEM) provides information on terrain undulations and elevations, which helps analyze the accumulation and propagation paths of water flow [
12]. Surface water distribution data contain historical positions and extents of water bodies, providing valuable prior knowledge for flood risk assessment. Therefore, effectively integrating and utilizing these multisource heterogeneous data can fully leverage their complementary advantages, increasing the accuracy and reliability of flood inundation extent extraction [
13,
14].
Traditional methods based on threshold segmentation [
15], supervised classification [
16], unsupervised classification [
17], and object-oriented classification [
18] have notable shortcomings: they rely on manually set parameters and features, lack adaptability, and often require large amounts of labeled data. When applied to SAR imagery with high noise, strong scattering, and complex terrestrial backgrounds, they are prone to error accumulation and insufficient generalization, resulting in low extraction accuracy and poor robustness [
19,
20].
In the field of image segmentation, machine learning, such as support vector machines (SVMs) [
21,
22], random forests (RFs) [
23,
24], and k-nearest neighbors (k-NNs) [
25], has been widely applied as an important technology in remote sensing image segmentation. Machine learning methods can automatically learn and extract effective features from labeled data, achieving a high-precision classification and segmentation of ground objects. Compared with traditional methods, these algorithms have greater adaptability and generalizability and effectively handle high noise, complex backgrounds, and diverse types of ground objects in remote sensing images.
The rapid development of deep learning technology has greatly promoted the advancement of remote sensing image analysis recently [
26]. In particular, semantic segmentation models based on convolutional neural networks (CNNs) and fully convolutional networks (FCNs) have significantly advanced remote sensing image segmentation [
27], with various architectures emerging. Among them, U-Net, as a classic fully convolutional network architecture, is widely used in remote sensing image analysis because of its symmetric encoder–decoder structure and multiscale feature fusion mechanism [
28]. U-Net combines high-resolution features from the encoder with upsampled features from the decoder through skip connections, preserving low-level spatial details while integrating high-level semantic information [
29]. This design significantly increases the performance of the model in image segmentation tasks and provides methodological references for the development of remote sensing image segmentation technology. U-Net and its variants have been widely used in remote sensing image analysis tasks [
30,
31,
32,
33]. However, as the network depth and complexity increase, traditional U-Net has certain limitations in capturing global features and long-range dependencies, and its parameter count and computational overhead also increase significantly, which affects the training efficiency and practical applications of the model in resource-constrained environments [
34,
35,
36].
To address the shortcomings of U-Net in global feature modeling and parameter efficiency, researchers have been continuously exploring new network architectures and optimization strategies. In recent years, several improved models based on U-Net have been proposed to increase its feature representation capability and adaptability to complex tasks. For example, attention U-Net [
37] introduces an attention mechanism that can dynamically focus on key information regions in the image, increasing segmentation accuracy and robustness; U-Net3+ [
38] further improves U-Net by using full-scale connections and deep supervision mechanisms. By integrating feature information at different scales, U-Net3+ reduces the semantic gap between the encoder and the decoder, increasing the segmentation accuracy and stability of the model. Moreover, the introduction of transformer architectures has led to new breakthroughs in U-Net. TransUNet [
39] combines the self-attention mechanism of the vision transformer, increasing the ability of the model to capture global contextual information, but its large parameter size and high computational demand limit its widespread deployment in practical applications. Swin-Unet [
40] introduces the Swin Transformer into the U-Net structure; its hierarchical window attention mechanism combines the global feature modeling capability of transformers with computational efficiency closer to that of convolutional networks, further improving segmentation performance. Compared with pure transformer-based models, such hybrid designs offer better parameter efficiency and lower computational complexity, making efficient flood extent extraction in resource-constrained environments more feasible.
Recently, studies have proposed an improved U-Net model based on the Kolmogorov–Arnold Network (KAN) [
41], known as the UKAN [
42]. Compared to traditional U-Net, the core innovation of UKAN lies in replacing standard convolutional layers with KAN layers, which achieve efficient nonlinear feature mapping through learnable one-dimensional activation functions. In the encoder stage, KAN layers dynamically capture the different features in images by adaptively adjusting the shape of the activation function, enhancing the model’s sensitivity to high noise and low-contrast regions. In the decoder stage, KAN layers, combined with skip connections, leverage their global feature modeling ability to effectively integrate multi-scale contextual information, reducing segmentation errors.
The KAN originates from the Kolmogorov–Arnold representation theorem [
43], which achieves efficient nonlinear mapping and global feature modeling by introducing learnable one-dimensional activation functions to replace the linear weight matrices in traditional neural networks. This theorem states that any multivariate continuous function defined on a bounded domain can be represented as a finite composition of several univariate continuous functions. The core significance of this theorem lies in transforming the complexity of high-dimensional functions into a combination of univariate functions, making the approximation and computation of multivariate functions more feasible.
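In symbols, the representation guaranteed by the theorem can be written as follows (this is the standard statement; the notation is ours, not reproduced from [43]):

```latex
% Kolmogorov–Arnold representation theorem: every continuous
% f : [0,1]^n -> R admits the finite composition
\[
  f(x_1, \ldots, x_n)
  = \sum_{q=1}^{2n+1} \Phi_q \!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),
\]
% where the inner functions \varphi_{q,p} : [0,1] -> R and the outer
% functions \Phi_q : R -> R are continuous and univariate.
% A KAN makes both families learnable rather than fixed.
```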
While this theorem has important theoretical implications, it was long considered impractical for real-world applications, owing to the complexity of the required univariate functions (which may be non-smooth or even fractal). The design of the KAN aims to overcome this limitation by introducing learnable one-dimensional activation functions to replace the linear weight matrices in multilayer perceptrons (MLPs), significantly reducing the number of model parameters and increasing parameter efficiency. In an MLP, the number of parameters in each linear layer is proportional to the product of its input and output dimensions, so the parameter count grows rapidly as the network widens and deepens. In contrast, the KAN parameters are concentrated mainly in one-dimensional activation functions, with the number of parameters linearly related to the input and output dimensions, greatly reducing the model complexity [
41].
The KAN also exhibits outstanding capabilities in nonlinear expression. MLPs rely on fixed activation functions (such as ReLU and Sigmoid) to introduce nonlinearity, but these functions have limited forms and struggle to fully capture complex nonlinear relationships in data. In contrast, KAN uses learnable and smooth one-dimensional activation functions that can adaptively adjust function shapes to better fit the nonlinear characteristics of the data. This flexibility allows the KAN to have stronger model representation and generalization performance to address high-dimensional complex data.
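To make this contrast concrete, the following minimal sketch (our own simplification for illustration, not the authors' implementation) compares a fixed ReLU with a KAN-style learnable one-dimensional activation. The grid size and the piecewise-linear parameterization are assumptions; practical KANs typically use B-spline bases [41].

```python
# Minimal sketch: a KAN-style learnable 1D activation, parameterized here
# as a piecewise-linear function on a fixed grid. Real KANs typically use
# B-splines [41]; linear interpolation keeps the idea visible with
# stdlib Python only.

def relu(x):
    """Fixed activation: its shape cannot adapt to the data."""
    return max(0.0, x)

class LearnableActivation:
    """phi(x) defined by learnable values at uniform grid points on [lo, hi]."""
    def __init__(self, lo=-1.0, hi=1.0, values=None):
        self.lo, self.hi = lo, hi
        # 'values' are the trainable parameters (one per grid point).
        self.values = values if values is not None else [0.0] * 5

    def __call__(self, x):
        n = len(self.values)
        # Clamp to the grid, then linearly interpolate between neighbours.
        t = (min(max(x, self.lo), self.hi) - self.lo) / (self.hi - self.lo)
        pos = t * (n - 1)
        i = min(int(pos), n - 2)
        frac = pos - i
        return self.values[i] * (1 - frac) + self.values[i + 1] * frac

# With these values phi mimics |x|; during training the same parameter
# vector could instead be fitted to a sigmoid-like or oscillating shape.
phi = LearnableActivation(values=[1.0, 0.5, 0.0, 0.5, 1.0])
print(phi(-1.0), phi(0.0), phi(0.5))
```

The point of the sketch is that the function's shape lives in its parameters, whereas ReLU's shape is fixed a priori.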
KAN has already demonstrated unique technological value in the field of remote sensing image processing. Previous studies have applied the KAN to hyperspectral image classification, and hybrid models based on KANs have achieved classification accuracy that surpasses or is comparable to traditional models, such as CNNs and ViTs, with less training data [
44]. Further research has shown that by introducing DeepKAN and GLKAN modules based on the KAN, the accuracy of semantic segmentation in remote sensing images can be effectively improved, particularly showing higher precision and robustness in complex scenarios compared to existing methods [
45]. Additionally, UKAN has been applied to crop field segmentation; the results indicate that UKAN outperforms traditional U-Net in segmentation accuracy and computational efficiency, especially when handling complex spatial relationships [
46]. However, the current application of the UKAN model in water segmentation, especially in flood inundation extraction, is limited. Flood inundation extraction is a complex remote sensing task, requiring the model to accurately identify water body regions and maintain high precision across various complex geographical environments. The advantage of the UKAN model lies in its ability to model complex nonlinear relationships. In particular, in complex flood scenarios, the KAN module can more accurately capture image boundary information, thereby improving the accuracy of flood region segmentation. Furthermore, UKAN can effectively handle features of varying scales and complexities through adaptive activation functions, which has the potential to enhance the model’s robustness and computational efficiency in flood extraction tasks.
On the basis of these considerations, we utilize the UKAN. This model introduces KAN layers in both the encoder and decoder to increase the adaptability of the model to complex scenes and its ability to capture detailed features. Specifically, the encoder uses convolutional layers to extract low-level features, followed by KAN layers for global feature modeling; the decoder gradually restores spatial resolution and improves feature fusion through skip connections and KAN layers. This design enables the model to maintain parameter efficiency while possessing stronger feature representation capabilities, which is highly important for the accurate extraction of flood inundation extents.
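The encoder-decoder arrangement described above can be sketched at the level of feature-map shapes. The following is a schematic of the U-shaped data flow only; the stage count, channel widths, and the placement of KAN layers (and the helper names `conv_stage`, `kan_stage`, `up_stage`) are our illustrative assumptions, not the exact UKAN configuration [42].

```python
# Schematic of the U-shaped data flow, tracked only as (C, H, W) shape
# tuples. Channel widths and stage placement are illustrative assumptions.

def conv_stage(shape, out_ch):
    """Conv block + 2x downsampling: (C, H, W) -> (out_ch, H/2, W/2)."""
    c, h, w = shape
    return (out_ch, h // 2, w // 2)

def kan_stage(shape, out_ch):
    """Tokenized KAN block: spatial size kept, channels remapped per token."""
    c, h, w = shape
    return (out_ch, h, w)

def up_stage(shape, skip_shape):
    """Upsample 2x and fuse with the encoder's skip connection."""
    c, h, w = shape
    sc, sh, sw = skip_shape
    assert (2 * h, 2 * w) == (sh, sw), "skip must match the upsampled size"
    return (sc, sh, sw)

x = (4, 256, 256)           # e.g. VV, VH, DEM, water-occurrence channels
e1 = conv_stage(x, 32)      # low-level features via convolutions
e2 = conv_stage(e1, 64)
e3 = conv_stage(e2, 128)
b  = kan_stage(e3, 128)     # global feature modeling via KAN layers
d2 = up_stage(b, e2)        # decoder restores resolution through skips
d1 = up_stage(d2, e1)
# A final upsample plus a 1-channel head would yield the (1, 256, 256) mask.
print(d1)
```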
In addition, to fully leverage the complementary information of multisource remote sensing data, we fuse dual-polarization SAR images, DEM, and surface water distribution data to construct a multisource input feature space. Dual-polarization SAR images capture the electromagnetic scattering characteristics of ground objects, DEM data capture the impact of terrain undulations on flood propagation, and surface water distribution data provide historical hydrological information. The fusion of multisource data helps the model to more comprehensively understand the complex mechanisms of flood occurrence, increasing the accuracy and robustness of inundation extent extraction.
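A hedged sketch of how such a multisource input might be assembled: co-registered rasters on the same grid are normalized per source and stacked along a channel axis. The band values and the min-max normalization below are illustrative assumptions, not the preprocessing actually used in this study.

```python
# Illustrative sketch: stack co-registered rasters into a C x H x W input.
# The sample values and min-max normalization are assumptions for
# demonstration, not this study's actual preprocessing.

def normalize(band):
    """Min-max scale one raster (list of rows) to [0, 1]."""
    flat = [v for row in band for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in band]

def stack_channels(*bands):
    """Stack aligned H x W rasters into a C x H x W tensor (nested lists)."""
    h, w = len(bands[0]), len(bands[0][0])
    assert all(len(b) == h and len(b[0]) == w for b in bands), "grids must align"
    return [normalize(b) for b in bands]

vv    = [[-18.0, -5.0], [-20.0, -4.0]]   # SAR backscatter, VV (dB)
vh    = [[-25.0, -10.0], [-27.0, -9.0]]  # SAR backscatter, VH (dB)
dem   = [[12.0, 15.0], [11.0, 16.0]]     # elevation (m)
water = [[1.0, 0.0], [0.0, 0.0]]         # historical water occurrence

x = stack_channels(vv, vh, dem, water)   # 4 channels x 2 x 2
print(len(x), len(x[0]), len(x[0][0]))
```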
In summary, the main contributions of this study include the following:
First-time use of an improved U-Net model based on the KAN (UKAN) for flood inundation extent extraction: the introduction of KAN layers strengthens the model's nonlinear mapping and global feature modeling capabilities, improving the accuracy and efficiency of flood inundation extent extraction.
Fusion of multisource remote sensing data: by comprehensively utilizing dual-polarization SAR images, DEM, and surface water distribution data, the complementary advantages of each data source are fully exploited, which increases the adaptability of the model to complex scenes.
Validation on multiple datasets: experiments were conducted on the C2S-MS Floods and MMFlood datasets, and the results show that the UKAN model outperforms traditional methods on various evaluation metrics, which verifies its effectiveness and superiority.
New methods and insights for flood disaster monitoring: the efficiency and accuracy of the model have important practical significance for improving flood disaster monitoring, early warning capabilities, and emergency response.
4. Results
4.1. Comparative Analysis of Different Models
The performance results of various models on the two datasets are presented in
Table 2. First, on the C2S-MS Floods dataset, the UKAN model shows clear advantages. Its IoU reaches 87.95%, compared with 84.46% for the traditional U-Net, 84.33% for Trans U-Net, and 84.46% for Attention U-Net, an improvement of approximately 3.5 percentage points. This enhancement indicates that UKAN can more accurately delineate flood regions and more effectively identify complex flood boundaries. Further analysis reveals that the precision and recall of UKAN reach 93.87% and 94.33%, respectively, which are markedly higher than those of the other models (90.41%, 90.33%, and 90.87% for precision; 92.61%, 92.56%, and 92.13% for recall). In terms of the F1 score, UKAN achieves 93.55%, compared with 91.48% for U-Net and 91.41% for Trans U-Net. These results indicate that the UKAN model excels in recognizing flood boundaries and details, effectively reducing false positives and false negatives and thereby improving the overall accuracy and reliability of flood inundation extent extraction.
For the MMFlood dataset, the UKAN model also achieves excellent performance. The IoU of the UKAN reaches 78.31%, whereas it reaches 75.24% for U-Net, 76.93% for Trans U-Net, and 77.66% for Attention U-Net, reflecting an improvement of approximately three percentage points. This result shows that UKAN can maintain efficient segmentation capabilities even in more complex and diverse flood scenarios. In terms of precision, UKAN achieves 86.27%, which is slightly higher than that of Attention U-Net (86.02%). Although the recall of UKAN of 92.17% is slightly lower than that of Trans U-Net (92.91%), it still remains at a high level, ensuring a comprehensive coverage of flood areas. In terms of the F1 score, UKAN attains 87.75%, once again better than the F1 scores of the other models (85.75%, 86.86%, and 87.30%), further validating its advantages in overall performance.
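For reference, the evaluation metrics quoted above follow the standard pixel-level definitions and can be computed from confusion counts. The counts in the example below are a toy illustration, not values from Table 2.

```python
# Standard pixel-level segmentation metrics from confusion counts.
# The counts here are a toy example, not values from Table 2.

def metrics(tp, fp, fn):
    iou = tp / (tp + fp + fn)            # intersection over union
    precision = tp / (tp + fp)           # how many predicted pixels are correct
    recall = tp / (tp + fn)              # how many true pixels were found
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1

iou, p, r, f1 = metrics(tp=90, fp=5, fn=5)
print(round(iou, 3), round(p, 3), round(r, 3), round(f1, 3))
```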
4.2. Visualization Results Analysis
To further assess the performance differences among the models in the task of flood inundation extent extraction, we conducted a visual comparison of the segmentation results on the C2S-MS Floods and MMFlood datasets, as shown in
Figure 6 and
Figure 7.
For the C2S-MS Floods dataset, the segmentation results of all the models are generally consistent. Since the flood regions in this dataset are relatively prominent and the image quality is good, all the models can accurately identify flood inundation extents. The segmentation results show that the models perform similarly in terms of boundary recognition and regional integrity, with few false detections and omissions. This finding indicates that on this dataset, all the models can effectively extract flood inundation areas, and the performance differences are not significant.
However, on the MMFlood dataset, the segmentation results of the models exhibit significant differences. The visualization results show that the UKAN model outperforms the other models on this dataset. The UKAN model can more accurately capture the details of flood regions, with clearer segmentation boundaries and higher regional coherence. In contrast, other models may exhibit blurred edges, loss of detail, or misclassifications in some complex areas, leading to segmentation results that deviate from the ground truth.
As shown in
Figure 8, in some simple cases, although none of the four models predicted the inundation extent accurately, the comparison indicates that UKAN and U-Net performed relatively poorly, whereas Attention U-Net and Trans U-Net performed relatively better. This phenomenon may be related to the models' feature extraction capabilities in simpler flood scenarios. Specifically, where the inundation extent is simple and the boundaries are blurred, the model must capture local changes more precisely. UKAN and U-Net may rely more heavily on global features and, in these simple scenarios, failed to extract sufficient local detail, leading to a decline in prediction accuracy.
We compared the TP, TN, FP, and FN rates on the C2S-MS Floods and MMFlood datasets, as shown in
Figure 9 and
Figure 10.
For the C2S-MS Floods dataset, the classification performance differences among the models are minimal. Since the flood images in this dataset are relatively simple, with clear distinctions between flood and non-flood regions, all the models can segment the flood areas well. However, in some edge regions, the FP rate and FN rate of the UKAN model are slightly higher than those of other classical models. This may be because the UKAN model tends to capture more complex feature relationships when processing edge details, resulting in slight overfitting in simple scenes. Nevertheless, this difference has a minimal impact on the overall performance, and the UKAN model still maintains high accuracy.
In complex flood image segmentation tasks, especially on the MMFlood dataset, the advantages of the UKAN model are more pronounced. The MMFlood dataset contains more complex flood scenes, diverse land cover types, and more complex environmental interferences, increasing the demands on the generalizability and robustness of the model. For this dataset, the UKAN model effectively reduces the FP rate and FN rate, especially showing significant improvement in the recognition of non-flood regions. This finding indicates that the UKAN model can more accurately distinguish between flood and non-flood areas with complex backgrounds, reducing the number of false detections and omissions. The improved ability of the UKAN model to recognize non-flood regions in the MMFlood dataset significantly reduces the misclassification of non-flood areas as flood areas (lowering the FP). This may be attributed to the effective fusion of global and local features and the full utilization of multisource data by the UKAN model, enabling it to more accurately capture feature differences in complex scenes.
4.3. Ablation Study
As previously mentioned, the original intent of the KAN is to overcome the limitations of MLPs in high-dimensional nonlinear mapping and global feature modeling. Traditional MLP layers, while playing a role in feature mapping, rely on the combination of linear weight matrices and activation functions, resulting in many parameters, low computational efficiency, and difficulty in effectively capturing complex nonlinear relationships in input data. When high-dimensional remote sensing images are processed, these issues can limit the performance and generalizability of the model.
To validate the advantages of KAN layers in increasing model performance and parameter efficiency, we designed ablation experiments using MLP layers embedded in a U-shaped structure as the baseline model for comparison. By replacing MLP layers with KAN layers and gradually increasing the number of KAN layers, we can evaluate the actual improvement effect of KAN layers on the model in the task of flood inundation extent extraction. Existing research shows that the performance reaches the best balance when KAN layers are set to three, and further increasing the number of layers leads to diminishing returns in performance improvement [
42]. Therefore, this study sets the maximum number of KAN layers to three to compare the differences in structural impacts. This comparison helps in understanding the contributions of KAN layers in feature representation, nonlinear modeling, and global feature capture, verifying the effectiveness of their theoretical advantages in practical applications. The specific results are shown in
Table 3.
The results show that replacing the MLP layers in the baseline model with a single KAN layer (Baseline + KAN layer) increases the IoU from 82.49% to 86.10%, with significant improvements in precision and the F1 score. This finding indicates that even a single KAN layer can markedly enhance the feature capturing capabilities of the model. When the number of KAN layers is further increased to two (Baseline + 2 KAN layers), the IoU continues to increase to 86.63%, but the improvement margin diminishes: precision slightly decreases to 91.73%, whereas recall increases to 93.88%. This suggests that additional KAN layers improve the model's ability to identify flood regions, but the gain in recall must be weighed against the slight loss in precision.
When the MLP structure is removed from the baseline model and two KAN layers are employed directly (2 KAN layers), the IoU increases to 86.89% and the recall improves significantly to 95.23%, indicating a substantial reduction in missed flood regions. With three KAN layers (3 KAN layers), the IoU reaches its highest value of 87.08%, with the precision and F1 score further improving to 93.15% and 93.05%, respectively. This trend shows that KAN layers do improve model performance, but the gains gradually level off as more layers are added. Model design must therefore balance performance gains against model complexity. In practical applications, adding KAN layers should also account for the computational burden: more KAN layers further improve performance but increase computational complexity, which is particularly significant when processing large-scale remote sensing image data.
4.4. Complexity Analysis of Different Models
Furthermore, we compared the segmentation performance and computational complexity of different models on the MMFlood dataset, as shown in
Figure 11.
The UKAN model has a parameter count of 37.54 M, which is significantly lower than those of Trans U-Net and Attention U-Net. Compared with traditional U-Net, the parameter count of UKAN is slightly greater, but the performance improvement is more significant. In contrast, although Attention U-Net introduces an attention mechanism and improves segmentation performance, its parameter count and computational load increase substantially (the parameter count is nearly four times that of UKAN, and the number of GFLOPs is approximately ten times greater), making it less suitable for deployment in resource-constrained environments. Owing to its transformer structure, Trans U-Net also suffers from a large parameter size and high computational complexity. The traditional U-Net has lower parameter counts and computational demands, but its performance is clearly inferior to that of UKAN. These findings demonstrate that UKAN achieves high performance while balancing parameter efficiency and computational efficiency, offering greater practical application value. In terms of training time, UKAN requires 181.41 s per epoch, significantly longer than the other models. This difference primarily stems from the KAN layer structure introduced in UKAN: while the learnable activation functions effectively enhance the model's performance and feature modeling capability, they also increase the computational burden during training.
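For context, per-layer parameter and FLOP counts of convolutional models are conventionally computed with the standard formulas below. This is illustrative only; the figures quoted above come from the actual model implementations, not from this sketch.

```python
# Standard per-layer cost formulas used when comparing model complexity.
# Illustrative only: the figures in the comparison above come from the
# actual implementations, not from this sketch.

def conv2d_params(c_in, c_out, k, bias=True):
    """Weights k*k*c_in*c_out, plus c_out bias terms."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def conv2d_flops(c_in, c_out, k, h, w):
    """Each multiply-accumulate counted as 2 FLOPs, at every output pixel."""
    return 2 * k * k * c_in * c_out * h * w

p = conv2d_params(64, 128, 3)           # one 3x3 conv layer
f = conv2d_flops(64, 128, 3, 256, 256)  # the same layer on a 256x256 map
print(p, f / 1e9)                       # parameters, GFLOPs
```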
5. Discussion
5.1. Performance Analysis of the UKAN Model
The UKAN model used in this study exhibits outstanding performance in the extraction of flood inundation extents, primarily because of the introduction of the KAN layers. Compared with the traditional U-Net and its improved variants, the KAN layers use learnable one-dimensional activation functions to replace the linear weight matrices in conventional neural networks, improving the nonlinear mapping and global feature modeling capabilities of the model. This design allows the UKAN model to more effectively capture complex nonlinear relationships and deep associations among features in remote sensing imagery, increasing the accuracy of flood area recognition.
When processing high-dimensional multisource heterogeneous remote sensing data, the UKAN model has higher parameter efficiency and computational efficiency. The ablation study results show that as the number of KAN layers increases, the model performance steadily improves, whereas the parameter count and computational complexity do not significantly increase. Compared with MLPs, KAN layers reduce the number of model parameters and lower the computational overhead, enabling the UKAN model to achieve efficient flood extent extraction in resource-constrained environments. Although training time is increased, the UKAN model achieves a good balance between computational efficiency and performance, ensuring high accuracy in flood extraction while maintaining resource efficiency. This makes it particularly well-suited for rapid response flood disaster monitoring, where timely and accurate results are critical in resource-constrained environments.
5.2. Model Adaptability on Different Datasets
The UKAN achieved excellent performance on both the C2S-MS Floods and MMFlood datasets, showing an especially strong advantage on the MMFlood dataset, which covers more extensive areas and encompasses a larger number of flood events, thus placing higher demands on the model's generalization ability and robustness. Nevertheless, the UKAN model maintained high accuracy on this dataset, fully demonstrating its adaptability to complex and variable environments.
This result reflects the advantages of KAN layers in capturing global features and complex nonlinear relationships, enabling the model to more accurately distinguish between flood and non-flood regions and reducing the number of false positives and false negatives. The visualization results further validate the superiority of the UKAN model in complex scenarios, with clearer segmentation boundaries and more precise detail capture. This is crucial for improving the accuracy and reliability of flood inundation extent extraction, especially for variable environments and complex terrains in practical applications.
5.3. Significance for Flood Disaster Monitoring
The outstanding performance of the UKAN model has important practical significance for flood disaster monitoring and emergency response. Higher extraction accuracy and reliability mean that information on flood inundation extents can be obtained more accurately and promptly, providing solid data support for disaster prevention, mitigation, and resource allocation. Especially in sudden flood disasters, efficient and accurate flood extent extraction helps increase early warning capabilities, support decision-making, and reduce casualties and property losses.
Moreover, by integrating the UKAN model with real-time remote sensing data, it can become part of a real-time flood monitoring platform. For example, during a disaster event, by combining cloud computing and edge computing technologies, UKAN can process SAR data from satellites and drones in real time, rapidly generate maps of flood extent, and deliver them to relevant agencies in the affected areas within a short period. Such a system not only increases the efficiency of disaster response but also provides timely flood alerts to the public, helping to reduce casualties and property damage. This offers new possibilities for building efficient and real-time flood monitoring systems, promoting the widespread application of remote sensing technology in disaster prevention and mitigation fields.
5.4. Limitations and Challenges of the Model
Despite the excellent performance of the UKAN model in this study, several limitations need attention. First, although the introduction of KAN layers improves model performance, it also increases model complexity, increasing the demands on training and optimization, which results in longer training times. Second, in certain situations, the model still exhibits recognition errors, particularly when flood areas are relatively simple. In these cases, UKAN may fail to capture the precise inundation extent, possibly because the model is overly complex and cannot effectively extract features in simpler scenarios, resulting in overfitting or overlooking critical regional information. Moreover, the performance of the UKAN model under different SAR acquisition modes still requires further validation, and its application across varying climate and geographic regions may present certain limitations, making it prone to overfitting to the characteristics of specific datasets.
Future research can proceed in the following directions: optimizing the structure and training strategies of KAN layers to further improve model performance and training efficiency and enhancing UKAN’s capability for local feature modeling, particularly for fine-grained predictions in simpler scenarios. Moreover, transfer learning has great potential for improving performance in scenarios with limited data, and future studies could investigate how to apply transfer learning to flood monitoring tasks in different geographical regions or under varying climate conditions. As the demand for real-time flood monitoring continues to grow, integrating UKAN into cloud platforms can not only facilitate large-scale data processing but also, with the support of remote computing and storage resources, ensure efficient model operation in real-world scenarios. At the same time, edge computing can enable the deployment of the model closer to the data source, providing more immediate results for disaster response.
6. Conclusions
This study is the first to utilize the UKAN model, which is based on KAN layers, for flood inundation extent extraction. The introduction of KAN layers enhances the model's nonlinear mapping and global feature modeling capabilities, significantly improving the accuracy of flood area recognition. The experimental results show that the UKAN model achieves excellent performance on both the C2S-MS Floods and MMFlood datasets, with especially strong adaptability and robustness in complex scenarios.
Additionally, the UKAN model has significant advantages in terms of parameter efficiency and computational efficiency, enabling efficient flood extent extraction in resource-constrained environments. This approach has important practical significance for flood disaster monitoring and emergency response, providing reliable data support for disaster prevention, mitigation, and resource allocation.
The successful application of the UKAN model is not limited to research experiments; its potential in practical disaster management is also immense. By integrating with real-time remote sensing data, UKAN can quickly process SAR data from platforms such as satellites and drones during a disaster, generating flood extent maps to support real-time flood monitoring and early warning.
In the future, the UKAN model is expected to be further integrated into cloud-based disaster monitoring systems, leveraging remote computing and storage resources to support large-scale data processing. At the same time, it can be deployed on edge computing platforms for proximity-based processing, enabling rapid processing and real-time feedback. This will help enhance flood disaster early warning capabilities, reduce casualties and property losses, and provide essential support for the efficient operation of global disaster management and response systems.