Search Results (17,232)

Search Parameters:
Keywords = cloud

25 pages, 4670 KiB  
Article
Resource Allocation Optimization Model for Computing Continuum
by Mihaela Mihaiu, Bogdan-Costel Mocanu, Cătălin Negru, Alina Petrescu-Niță and Florin Pop
Mathematics 2025, 13(3), 431; https://doi.org/10.3390/math13030431 - 27 Jan 2025
Abstract
The exponential growth of Internet of Things (IoT) devices has led to massive volumes of data, challenging traditional centralized processing paradigms. The cloud–edge continuum computing model has emerged as a promising solution, offering a distributed approach to data processing and management and improved performance in terms of communication-network overhead and latency. In this paper, we present a novel resource allocation optimization solution for cloud–edge continuum architectures designed to support multiple heterogeneous mobile clients running a set of applications in a 5G-enabled environment. Our approach is structured across three layers (mist, edge, and cloud) and introduces a set of innovative resource allocation models that address the limitations of the traditional bin-packing optimization problem in IoT systems. The proposed solution integrates task offloading and resource allocation strategies designed to optimize energy consumption while ensuring compliance with Service Level Agreements (SLAs) by minimizing resource consumption. The evaluation shows that edge servers remain active for longer periods because of their lower energy consumption. These results indicate that the proposed solution is viable and offers a sustainable model that prioritizes energy efficiency, in alignment with current climate concerns.
(This article belongs to the Special Issue Distributed Systems: Methods and Applications)
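The traditional bin-packing formulation that such allocation models improve on can be illustrated with a minimal first-fit-decreasing baseline. This is an illustrative sketch only: the function name, the single-capacity task representation, and the greedy heuristic are assumptions, not the paper's model.

```python
def first_fit_decreasing(tasks, capacity):
    """Greedy bin-packing baseline: place each task (largest demand first)
    into the first active server with enough remaining capacity."""
    servers = []  # remaining capacity of each powered-on server
    for demand in sorted(tasks, reverse=True):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] = free - demand  # fits in an existing server
                break
        else:
            servers.append(capacity - demand)  # power on a new server
    return len(servers)  # number of servers (bins) in use
```

Energy-aware schemes like the one described above go further than this baseline, for example by weighting placements by per-server power draw rather than only minimizing the server count.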
21 pages, 12847 KiB  
Article
Spatiotemporal Patterns of Chlorophyll-a Concentration in a Hypersaline Lake Using High Temporal Resolution Remotely Sensed Imagery
by R. Douglas Ramsey, Soren M. Brothers, Melissa Cobo and Wayne A. Wurtsbaugh
Remote Sens. 2025, 17(3), 430; https://doi.org/10.3390/rs17030430 - 27 Jan 2025
Abstract
The Great Salt Lake (GSL) is the largest saline lake in the Western Hemisphere. It supports billion-dollar industries and recreational activities, and is a vital stopping point for migratory birds. However, little is known about the spatiotemporal variation of the phytoplankton biomass in the lake that supports these resources. Spectral reflectance from three remote sensing products was compared against field measurements of chlorophyll a (Chl a). The MODIS product MCD43A4, with a 500 m spatial resolution, provided the best overall ability to map the daily distribution of Chl a. The imagery indicated significant spatial variation in Chl a, with low concentrations in littoral areas and high concentrations in a nutrient-rich plume coming out of a polluted embayment. Seasonal differences showed higher Chl a concentrations in winter and lower concentrations in summer due to heavy grazing pressure from brine shrimp (Artemia franciscana). Twenty years of imagery revealed a 68% increase in Chl a, coinciding with a period of declining lake levels and increasing local human populations, with potentially major implications for the food web and biogeochemical cycling dynamics in the lake. The MCD43A4 daily cloud-free images, produced from 16-day temporal composites of MODIS imagery, provide a cost-effective and temporally dense means to monitor phytoplankton in the southern portion of the GSL (47% of its surface area), but the remaining bays could not be effectively monitored due to shallow depths and/or plankton with different pigments under extreme hypersaline conditions.

20 pages, 9475 KiB  
Article
Cross-Domain Generalization for LiDAR-Based 3D Object Detection in Infrastructure and Vehicle Environments
by Peng Zhi, Longhao Jiang, Xiao Yang, Xingzheng Wang, Hung-Wei Li, Qingguo Zhou, Kuan-Ching Li and Mirjana Ivanović
Sensors 2025, 25(3), 767; https://doi.org/10.3390/s25030767 - 27 Jan 2025
Abstract
In the intelligent transportation field, the Internet of Things (IoT) commonly relies on 3D object detection as a crucial part of Vehicle-to-Everything (V2X) cooperative perception. However, discrepancies in sensor configurations between vehicles and infrastructure lead to variations in the scale and heterogeneity of point clouds. To address the performance differences caused by the generalization problem of 3D object detection models on heterogeneous LiDAR point clouds, we propose the Dual-Channel Generalization Neural Network (DCGNN), which incorporates a novel data-level downsampling and calibration module along with a cross-perspective Squeeze-and-Excitation attention mechanism for improved feature fusion. Experimental results on the DAIR-V2X dataset indicate that DCGNN outperforms detectors trained on single datasets, demonstrating significant improvements over the selected baseline models.
(This article belongs to the Special Issue Connected Vehicles and Vehicular Sensing in Smart Cities)

35 pages, 8022 KiB  
Review
Internet of Robotic Things: Current Technologies, Challenges, Applications, and Future Research Topics
by Jakub Krejčí, Marek Babiuch, Jiří Suder, Václav Krys and Zdenko Bobovský
Sensors 2025, 25(3), 765; https://doi.org/10.3390/s25030765 - 27 Jan 2025
Abstract
This article focuses on the integration of the Internet of Things (IoT) with robotics, a dynamic research area with significant potential for industrial applications. The Internet of Robotic Things (IoRT) integrates IoT technologies into robotic systems, enhancing their efficiency and autonomy. The article provides an overview of the technologies used in IoRT, including hardware components, communication technologies, and cloud services, and explores IoRT applications in industries such as healthcare and agriculture. It also discusses challenges and future research directions, including data security, energy efficiency, and ethical issues. The goal is to raise awareness of the importance of IoRT and demonstrate how this technology can bring significant benefits across various sectors.

23 pages, 2981 KiB  
Article
IoT-Driven Intelligent Scheduling Solution for Industrial Sewing Based on Real-RCPSP Model
by Huu Dang Quoc, Loc Nguyen The, Truong Bui Quang and Phuong Han Minh
Future Internet 2025, 17(2), 56; https://doi.org/10.3390/fi17020056 - 26 Jan 2025
Abstract
Applying IoT systems in industrial production allows data to be collected directly from production lines and factories. These data are aggregated, analyzed, and converted into reports that support manufacturers, so business managers can quickly grasp the situation and make timely, effective management decisions. In industrial sewing, IoT applications collect production data from sewing lines, especially from industrial sewing machines, and transmit those data to cloud-based systems, allowing businesses to analyze production conditions and improve management capacity. This article explores the implementation of IoT applications at industrial sewing enterprises, focusing on data collection during the production process, and proposes a data structure to integrate this information into the company's enterprise MIS system. In addition, the research considers applying the Real-RCPSP model to support businesses in planning production operations automatically.
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)

26 pages, 483 KiB  
Review
Quality of Experience-Oriented Cloud-Edge Dynamic Adaptive Streaming: Recent Advances, Challenges, and Opportunities
by Wei Wang, Xuekai Wei, Wei Tao, Mingliang Zhou and Cheng Ji
Symmetry 2025, 17(2), 194; https://doi.org/10.3390/sym17020194 - 26 Jan 2025
Abstract
The widespread adoption of dynamic adaptive streaming (DAS) has revolutionized the delivery of high-quality internet multimedia content by enabling dynamic adjustment of streaming quality based on network conditions and playback capabilities. While numerous reviews have explored DAS technologies, this study differentiates itself by focusing on Quality of Experience (QoE)-oriented optimization in cloud-edge collaborative environments. Traditional DAS optimization often overlooks the asymmetry between cloud and edge nodes, where edge resources are typically constrained. This review emphasizes the importance of dynamic task and traffic allocation between cloud and edge nodes to optimize resource utilization and maintain system efficiency, ultimately improving QoE for end users. The analysis covers recent advances in QoE-driven DAS optimization strategies, including streaming models, implementation mechanisms, and the integration of machine learning (ML) techniques. By contrasting ML-based DAS approaches with traditional methods, the study highlights the added value of intelligent algorithms in addressing modern streaming challenges. Finally, the review identifies emerging research directions, such as adaptive resource allocation and hybrid cloud-edge solutions, and underscores potential application areas for DAS in evolving multimedia systems. The review is intended as a resource for researchers, practitioners, and decision-makers addressing the challenges of resource-constrained edge environments and the need for QoE-centric solutions, with the aim of advancing video streaming experiences in cloud-edge collaborative environments.
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
16 pages, 421 KiB  
Article
The Gaussian-Drude Lens: A Dusty Plasma Model Applicable to Observations Across the Electromagnetic Spectrum
by Adam Rogers
Abstract
When radiation from a background source passes through a cloud of cold plasma, diverging lensing occurs if the source and observer are well aligned. Unlike gravitational lensing, plasma lensing is dispersive, increasing in strength with wavelength. The Drude model is a generalization of cold plasma that includes absorbing dielectric dust described by a complex index of refraction. The Drude lens is only dispersive for wavelengths shorter than the dust characteristic scale (λ ≪ λd); at sufficient photon energy, the dust particles act like refractive clouds. For longer wavelengths (λ ≫ λd), the optical properties of the Drude lens are constant, unique behavior compared to the predictions of the cold plasma lens. Thus, cold plasma lenses can be distinguished from Drude lenses using multi-band observations. The Drude medium extends the applicability of all previous tools from gravitational and plasma lensing to describe scattering phenomena in the X-ray regime.
(This article belongs to the Special Issue Recent Advances in Gravitational Lensing and Galactic Dynamics)
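The wavelength dependence that distinguishes a cold plasma lens can be sketched from the textbook cold-plasma refractive index n = sqrt(1 − (νp/ν)²), with plasma frequency νp ≈ 8980·√ne Hz for electron density ne in cm⁻³. This is the standard cold-plasma relation, not the paper's Drude formulation:

```python
import numpy as np

def cold_plasma_index(freq_hz, n_e_cm3):
    """Refractive index of a cold plasma: n = sqrt(1 - (nu_p/nu)^2),
    where nu_p [Hz] ~= 8980 * sqrt(n_e [cm^-3]) is the plasma frequency."""
    nu_p = 8980.0 * np.sqrt(n_e_cm3)
    return np.sqrt(1.0 - (nu_p / np.asarray(freq_hz, dtype=float)) ** 2)
```

Since n < 1 and decreases toward lower frequencies (longer wavelengths), deflection grows with wavelength: the dispersive, diverging behavior the abstract contrasts with the constant long-wavelength optics of the Drude lens.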

34 pages, 8765 KiB  
Article
Short-Medium-Term Solar Irradiance Forecasting with a CEEMDAN-CNN-ATT-LSTM Hybrid Model Using Meteorological Data
by Max Camacho, Jorge Maldonado-Correa, Joel Torres-Cabrera, Sergio Martín-Martínez and Emilio Gómez-Lázaro
Appl. Sci. 2025, 15(3), 1275; https://doi.org/10.3390/app15031275 - 26 Jan 2025
Abstract
In recent years, the adverse effects of climate change have intensified rapidly worldwide, driving countries to transition to clean energy sources such as solar and wind. These sources, however, are affected by cloud cover, precipitation, wind speed, and temperature, which introduce variability and intermittency into power generation and complicate integration into the interconnected grid. To address this, we present a novel hybrid deep learning model, CEEMDAN-CNN-ATT-LSTM, for short- and medium-term solar irradiance prediction. The model uses complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to extract intrinsic seasonal patterns in solar irradiance. In addition, it employs a hybrid encoder-decoder framework that combines convolutional neural networks (CNN) to capture spatial relationships between variables, an attention mechanism (ATT) to identify long-term patterns, and a long short-term memory (LSTM) network to capture short-term dependencies in time series data. The model was validated using meteorological data from a region above 2400 m above sea level with complex climatic conditions in southern Ecuador. It predicted irradiance at 1, 6, and 12 h horizons with a mean absolute error (MAE) of 99.89 W/m2 in winter and 110.13 W/m2 in summer, outperforming the reference methods of this study. These results demonstrate the model's applicability to solar energy forecasting in real scenarios with high climatic variability.
(This article belongs to the Section Energy Science and Technology)

26 pages, 2002 KiB  
Article
Giant Aerosol Observations with Cloud Radar: Methodology and Effects
by Pilar Gumà-Claramunt, Fabio Madonna, Aldo Amodeo, Matthias Bauer-Pfundstein, Nikolaos Papagiannopoulos, Marco Rosoldi and Gelsomina Pappalardo
Remote Sens. 2025, 17(3), 419; https://doi.org/10.3390/rs17030419 - 26 Jan 2025
Abstract
In this study, we present an innovative methodology for identifying giant aerosols using cloud radar. The methodology draws on several insect studies to separate radar-derived atmospheric plankton signatures into the contributions of insects and giant aerosols. It is then applied to a 6-year cloud radar dataset from Potenza, southern Italy. Forty giant aerosol events per year were found, in good agreement with the site's climatological record. A sensitivity study on the effects of giant aerosols on three atmospheric variables under different atmospheric stability conditions showed that the presence of giant aerosols (a) increased the aerosol optical depth under all stability conditions, (b) decreased the Ångström exponent for the highest and lowest stability conditions and had the opposite effect for the intermediate condition, and (c) increased the accumulated precipitation under all atmospheric conditions, especially the most unstable ones.
(This article belongs to the Section Atmospheric Remote Sensing)
28 pages, 24222 KiB  
Article
TLSynth: A Novel Blender Add-On for Real-Time Point Cloud Generation from 3D Models
by Emiliano Pérez, Adolfo Sánchez-Hermosell and Pilar Merchán
Remote Sens. 2025, 17(3), 421; https://doi.org/10.3390/rs17030421 - 26 Jan 2025
Abstract
Point clouds are a crucial element in scanning and reconstructing 3D environments such as buildings or heritage sites, enabling the creation of 3D models that can be used in a wide range of applications. In some cases, however, only the 3D model of an environment is available, and it is necessary to obtain point clouds with the same characteristics as those captured by a laser scanner. Such point clouds may be required, for instance, for surveys, performance optimization, site scan planning, or validation of point cloud processing algorithms. This paper presents a new terrestrial laser scanner (TLS) simulator, designed as a Blender add-on, that produces synthetic point clouds from 3D models in real time. The simulator lets users adjust a set of parameters to replicate real-world scanning conditions, such as noise generation, ensuring the synthetic point clouds closely mirror those produced by actual laser scanners. The target meshes may be derived from either a real-world scan or 3D designs created with design software. By replicating the spatial distributions and attributes of real laser scanner outputs and supporting real-time generation, the simulator serves as a valuable tool for scan planning and for building synthetic point cloud repositories, advancing research and practical applications in 3D computer vision.

15 pages, 12073 KiB  
Article
Classification of Hydrometeors During a Stratiform Precipitation Event in the Rainy Season of Liupanshan
by Nansong Feng, Zhiliang Shu and Yujun Qiu
Atmosphere 2025, 16(2), 132; https://doi.org/10.3390/atmos16020132 - 26 Jan 2025
Abstract
This study classified hydrometeor types during a typical stratiform mixed cloud precipitation event in the rainy season using power spectra from the Liupan Mountains micro rain radar. The primary findings are as follows: (1) The RaProM method combines particle fall velocity, equivalent radar reflectivity, particle scale characteristics at different stages, and the location of the zero-degree-layer bright band to classify hydrometeors during the precipitation process; the results show that the time periods of the drizzle and raindrop distributions do not match the raindrop spectra and rain intensities observed by the DSG5 ground-based precipitation gauge. (2) Sensitivity experiments on the RaProM method revealed that, after modifying the discrimination thresholds for drizzle and raindrops, the distributions of drizzle and raindrops aligned better with ground-based raindrop spectrum observations. These adjustments also showed better consistency with the radar reflectivity factor, Doppler velocity, and velocity spectrum width thresholds used by existing millimeter-wave cloud radars to discriminate between drizzle and raindrops. (3) The various hydrometeors show different vertical distribution characteristics across the three precipitation stages (weak, strong, and weak). In the two weak stages, hydrometeors existed mainly as snowflakes above the zero-degree layer and as drizzle below it. The difference in vertical distribution between the mountain peak and base sites demonstrates that terrain significantly influences hydrometeors during precipitation.
(This article belongs to the Section Meteorology)

22 pages, 27292 KiB  
Article
Adversarial Robustness for Deep Learning-Based Wildfire Prediction Models
by Ryo Ide and Lei Yang
Abstract
Rapidly growing wildfires have recently devastated societal assets, exposing a critical need for early warning systems to expedite relief efforts. Smoke detection using camera-based Deep Neural Networks (DNNs) offers a promising solution for wildfire prediction. However, the rarity of smoke across time and space limits training data, raising concerns about model overfitting and bias. Current DNNs, primarily Convolutional Neural Networks (CNNs) and transformers, complicate robustness evaluation due to architectural differences. To address these challenges, we introduce WARP (Wildfire Adversarial Robustness Procedure), the first model-agnostic framework for evaluating the adversarial robustness of wildfire detection models. WARP addresses inherent limitations in data diversity by generating adversarial examples through image-global and image-local perturbations. Global and local attacks superimpose Gaussian noise and PNG patches onto image inputs, respectively; this suits both CNNs and transformers while generating realistic adversarial scenarios. Using WARP, we assessed real-time CNNs and transformers, uncovering key vulnerabilities: transformers at times exhibited over 70% precision degradation under global attacks, while both models generally struggled to differentiate cloud-like PNG patches from real smoke during local attacks. To enhance model robustness, we propose four wildfire-oriented data augmentation techniques based on WARP's methodology and results, which diversify smoke image data and improve model precision and robustness. These advancements represent a substantial step toward a reliable early wildfire warning system, which may be our first safeguard against wildfire destruction.
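The image-global attack described above amounts to superimposing clipped Gaussian noise on every pixel. A minimal sketch follows; the function name, sigma parameterization, and 8-bit value range are illustrative assumptions, not WARP's exact procedure:

```python
import numpy as np

def global_gaussian_attack(image, sigma, seed=0):
    """Superimpose zero-mean Gaussian noise on every pixel of an 8-bit
    image, then clip back to the valid [0, 255] range."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=image.shape)
    perturbed = image.astype(np.float64) + noise
    return np.clip(perturbed, 0, 255).astype(np.uint8)
```

A local attack would instead composite a small patch (e.g. a cloud-like PNG with an alpha channel) onto one region of the frame rather than perturbing every pixel.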

18 pages, 949 KiB  
Article
Coupling Secret Sharing with Decentralized Server-Aided Encryption in Encrypted Deduplication
by Chuang Gan, Weichun Wang, Yuchong Hu, Xin Zhao, Shi Dun, Qixiang Xiao, Wei Wang and Huadong Huang
Appl. Sci. 2025, 15(3), 1245; https://doi.org/10.3390/app15031245 - 26 Jan 2025
Abstract
Outsourcing storage to the cloud can save storage costs and is common in business. It should fulfill two major goals: storage efficiency and data confidentiality. Encrypted deduplication achieves both by eliminating duplicate data within encrypted data. Traditional encrypted deduplication generates the encryption key on the client side, which exposes the outsourced data to offline brute-force attacks. Server-aided encryption schemes have been proposed to strengthen the confidentiality of encrypted deduplication by distributing the encryption process to dedicated servers. Existing schemes rely on expensive cryptographic primitives to provide a decentralized setting on the dedicated servers for scalability; however, this incurs a substantial performance slowdown and cannot be applied in practical encrypted deduplication storage systems. In this paper, we propose ECDedup, a new decentralized server-aided encrypted deduplication approach for outsourced storage that leverages secret sharing to achieve secure and efficient key management. We are the first to use the coding matrix as the encryption key, coupling the encryption and encoding processes in encrypted deduplication. We also propose an acceleration scheme to speed up the encryption process. We prototyped ECDedup in cloud environments, and experimental results on real-world backup datasets show that ECDedup improves client throughput by up to 51.9% compared to state-of-the-art encrypted deduplication schemes.
(This article belongs to the Special Issue Application of Deep Learning and Big Data Processing)
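ECDedup's coding-matrix construction is not spelled out in the abstract, but the general idea of secret-sharing a key can be illustrated with the simplest n-of-n scheme, XOR-based additive sharing. This is an illustrative stand-in, not ECDedup's actual matrix-based coupling of encryption and encoding:

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret, n):
    """n-of-n XOR secret sharing: each share alone is uniformly random,
    and all n shares are required to recover the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share fixes the XOR sum
    return shares

def combine_shares(shares):
    return reduce(xor_bytes, shares)
```

Threshold (k-of-n) variants such as Shamir's scheme over a finite field would additionally tolerate server failures, at the cost of field arithmetic on every share.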

25 pages, 1059 KiB  
Article
Digital Evolution in Nigerian Heavy-Engineering Projects: A Comprehensive Analysis of Technology Adoption for Competitive Edge
by John Aliu, Ayodeji Emmanuel Oke, Oluwatayo Timothy Jesudaju, Prince O. Akanni, Tolulope Ehbohimen and Oluwaseun Sunday Dosumu
Buildings 2025, 15(3), 380; https://doi.org/10.3390/buildings15030380 - 26 Jan 2025
Abstract
The fourth industrial revolution has introduced a range of digital technologies (DTs) with the potential to significantly enhance the operations and competitiveness of heavy-construction firms. Grounded in the Technology–Organization–Environment (TOE) Framework, the Resource-Based View (RBV), and the Diffusion of Innovation Theory (DOI), this study investigates the relationship between the adoption of digital technologies and the competitive edge (CE) of heavy-engineering firms. Specifically, it assesses how DT adoption affects four critical competitive-edge metrics: efficient resource management (CE1), real-time monitoring and control (CE2), data-driven decision-making (CE3), and improved collaboration and communication (CE4). A quantitative research approach was employed, using a structured questionnaire distributed to construction professionals in Lagos State, Nigeria. The principal results revealed that firms adopting artificial intelligence (AI), cloud-based technology, and the Internet of Things (IoT) exhibited significantly higher competitive-edge metrics than their counterparts. Notably, AI and cloud-based technology showed particularly strong associations with improved resource management, real-time monitoring, and decision-making. A major contribution of this research is a DT-adoption model that can serve as a benchmarking tool for firms to assess their current adoption levels and identify areas for improvement, and that can guide policymakers and regulators in developing strategies to encourage the integration of digital technologies within the heavy-construction industry. The originality of this study lies in its holistic approach, examining a broad spectrum of digital technologies and their collective impact on the competitive edge of construction firms.
(This article belongs to the Section Construction Management, and Computers & Digitization)

30 pages, 2330 KiB  
Article
A New Hybrid Improved Kepler Optimization Algorithm Based on Multi-Strategy Fusion and Its Applications
by Zhenghong Qian, Yaming Zhang, Dongqi Pu, Gaoyuan Xie, Die Pu and Mingjun Ye
Mathematics 2025, 13(3), 405; https://doi.org/10.3390/math13030405 - 26 Jan 2025
Abstract
The Kepler optimization algorithm (KOA) is a metaheuristic based on Kepler's laws of planetary motion that has demonstrated outstanding performance on multiple test sets and various optimization problems. However, the KOA suffers from insufficient convergence accuracy, weak global search ability, and slow convergence speed. To address these deficiencies, this paper presents a multi-strategy fusion Kepler optimization algorithm (MKOA). Firstly, the algorithm initializes the population using a Good Point Set, enhancing population diversity. Secondly, Dynamic Opposition-Based Learning is applied to population individuals to further improve global exploration. Furthermore, we introduce the Normal Cloud Model to perturb the best solution, improving its convergence rate and accuracy. Finally, a new position-update strategy is introduced to balance local and global search, helping the KOA escape local optima. To evaluate the MKOA, we use the CEC2017 and CEC2019 test suites. The data indicate that the MKOA has advantages over other algorithms in terms of practicality and effectiveness. For engineering applications, this study selected three classic engineering cases; the results reveal that the MKOA demonstrates strong applicability in engineering practice.
(This article belongs to the Special Issue Metaheuristic Algorithms, 2nd Edition)
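Opposition-Based Learning, one of the fused strategies, reflects each candidate through the center of the search box and keeps the better half of the combined pool. Below is a minimal static-OBL sketch; the dynamic variant used in the paper adapts the bounds over time, which is omitted here:

```python
import numpy as np

def obl_step(pop, lower, upper, fitness):
    """Evaluate each candidate x and its opposite point (lower + upper - x),
    then keep the best len(pop) candidates (minimization)."""
    opposites = lower + upper - pop           # reflect through the box center
    candidates = np.vstack([pop, opposites])  # pool originals and opposites
    scores = np.apply_along_axis(fitness, 1, candidates)
    best = np.argsort(scores)[: len(pop)]     # indices of the best half
    return candidates[best]
```

On a symmetric unimodal objective this tends to pull the population toward the optimum in a single step, which is why OBL is a cheap way to boost early-stage exploration.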
