Search Results (628)

Search Parameters:
Keywords = priori information

20 pages, 2094 KiB  
Article
Feasibility, Acceptability, and Outcomes of Project Rally: Pilot Study of a YMCA-Based Pickleball Program for Cancer Survivors
by Nathan H. Parker, Alexandre de Cerqueira Santos, Riley Mintrone, Kea Turner, Steven K. Sutton, Tracey O’Connor, Jeffrey Huang, Morgan Lael, Summer Cruff, Kari Grassia, Mart Theodore De Vera, Morgan Bean, Rachel Carmella, Susan T. Vadaparampil and Jennifer I. Vidrine
Healthcare 2025, 13(3), 256; https://doi.org/10.3390/healthcare13030256 - 28 Jan 2025
Viewed by 454
Abstract
Background: Physical activity helps cancer survivors ameliorate physiological and psychosocial effects of disease and treatments. However, few cancer survivors meet physical activity recommendations, with many facing barriers such as limited interest, enjoyment, and social support. It is critical to develop enjoyable and supportive physical activity programs to improve well-being among the growing population of cancer survivors. Pickleball is increasingly popular due to its unique combination of physical activity, friendly competition, and social interaction, making it a promising strategy to increase and sustain physical activity in cancer survivorship. Objective: We examined feasibility, acceptability, and preliminary outcomes in a single-arm pilot study of Project Rally, a YMCA-based pickleball program for adult cancer survivors. Results: Twenty-one cancer survivors and seven family or friend partners enrolled in Project Rally with a targeted program duration of 3–7 months. All programming and study assessments occurred at a single YMCA with coaching and supervision from a YMCA exercise trainer and certified pickleball coach. Feasibility and acceptability were strong and met a priori targets for recruitment, retention, intervention adherence, and ratings of program aspects. Participants demonstrated significant increases in physical activity and improvements in aspects of fitness, physical functioning, and social support. Conclusion: These results will inform further development of the Project Rally program to increase physical activity and improve cancer survivorship outcomes, including efforts to expand the program’s scale and reach more survivors via community-based delivery. Full article

11 pages, 841 KiB  
Article
Detecting and Explaining Long-Term Trends in a Weed Community in a Biennial Cereal–Legume Rotation
by Jose L. Gonzalez-Andujar and Irene Gonzalez-Garcia
Viewed by 310
Abstract
Dynamic Factor Analysis (DFA), a technique for reducing the dimensionality of time-series data, was used to model 26-year time series of eight weed species growing in a biennial cereal–legume rotation. The aim of the present study was to determine whether long-term trends exist in the weed community and to identify the factors that drive them. A common trend was extracted that captured the main signal of abundance over time, indicating latent influences affecting all species. Canonical correlation analysis showed strong associations between the common trend and specific weed species, suggesting differential responses to this latent factor. Local weather factors (temperature and precipitation) and a global one (the North Atlantic Oscillation, NAO) were considered as explanatory variables for the common trend. The local weather variables did not play a significant role in explaining the observed trend. Conversely, the NAO showed a significant relationship with the weed community, indicating its potential role in shaping long-term weed dynamics. DFA was found to be useful for studying the variability in multivariate weed time series without the need for detailed a priori information on the underlying mechanisms governing weed population dynamics. Overall, this study provided valuable insights into the long-term drivers of weed dynamics and set the stage for future research in this area. Full article
(This article belongs to the Section Weed Science and Weed Management)
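
As a rough illustration of the workflow the abstract describes, the sketch below extracts a single common trend from a synthetic multi-species abundance matrix and relates it to a candidate climate driver. PCA is used as a simple stand-in for the Dynamic Factor Analysis state-space fit, and all data, species counts, and the NAO series are synthetic assumptions, not the study's records.

```python
# Hedged sketch: extracting a single common trend from a multi-species weed
# time series and testing a weather covariate against it. PCA stands in for
# the DFA trend; the synthetic data below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_species = 26, 8

# Synthetic abundances (years x species); replace with observed counts.
latent = np.cumsum(rng.normal(size=n_years))            # hidden common trend
abundance = latent[:, None] * rng.uniform(0.3, 1.0, n_species) \
            + rng.normal(scale=0.5, size=(n_years, n_species))

# Standardize each species series, then take the leading principal component
# as the shared trend (DFA would instead fit a state-space model).
Z = (abundance - abundance.mean(0)) / abundance.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
common_trend = U[:, 0] * S[0]

# Correlate each species with the trend (analogue of the canonical loadings).
loadings = np.array([np.corrcoef(Z[:, j], common_trend)[0, 1]
                     for j in range(n_species)])

# Regress the trend on a candidate driver, e.g. an NAO index (illustrative).
nao = rng.normal(size=n_years)
X = np.column_stack([np.ones(n_years), nao])
beta, *_ = np.linalg.lstsq(X, common_trend, rcond=None)
print("species loadings:", np.round(loadings, 2))
print("NAO coefficient :", round(beta[1], 3))
```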

22 pages, 1097 KiB  
Article
Efficient AOA Estimation and NLOS Signal Utilization for LEO Constellation-Based Positioning Using Satellite Ephemeris Information
by Junqi Guo and Yang Wang
Appl. Sci. 2025, 15(3), 1080; https://doi.org/10.3390/app15031080 - 22 Jan 2025
Viewed by 456
Abstract
As large-scale low Earth orbit (LEO) constellations continue to expand, the potential of their signal strength for positioning applications should be fully leveraged. For high-precision angle of arrival (AOA) estimation, current spectrum search algorithms are computationally expensive. To address this, we propose a method that downscales the 2D joint spectrum search algorithm by incorporating satellite ephemeris a priori information. The proposed algorithm efficiently and accurately determines the azimuth and elevation angles of NLOS (non-line-of-sight) signals. Furthermore, an NLOS virtual satellite construction method is introduced for integrating NLOS satellite data into the positioning system using previously estimated azimuth and elevation angles. Simulation experiments, conducted with a uniform planar array antenna in environments containing both LOS (line-of-sight) and NLOS signals, demonstrate the effectiveness of the proposed solution. The results show that the azimuth determination algorithm reduces computational complexity without sacrificing accuracy, while the NLOS virtual satellite construction method significantly enhances positioning accuracy in NLOS environments. The geometric dilution of precision (GDOP) improved significantly, decreasing from values exceeding 10 to an average of less than 1.42. Full article
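
A minimal sketch of the core idea, reducing the two-dimensional azimuth–elevation spectrum search to a one-dimensional azimuth search by fixing the elevation at the ephemeris-predicted value, is given below. The array geometry, carrier frequency, MUSIC-style spectrum, and simulated NLOS signal are illustrative assumptions rather than the paper's actual algorithm or constellation parameters.

```python
# Hedged sketch: 1-D azimuth search with elevation fixed from ephemeris.
import numpy as np

rng = np.random.default_rng(1)
c = 299792458.0
f = 11.325e9                      # illustrative downlink frequency
lam = c / f
d = lam / 2                       # half-wavelength element spacing
Nx, Ny, snapshots = 6, 6, 200

def steering(az, el):
    """Steering vector of an Nx x Ny uniform planar array in the x-y plane."""
    kx = np.cos(el) * np.cos(az)
    ky = np.cos(el) * np.sin(az)
    m, n = np.meshgrid(np.arange(Nx), np.arange(Ny), indexing="ij")
    phase = 2j * np.pi / lam * d * (m * kx + n * ky)
    return np.exp(phase).ravel()

# Simulate one NLOS arrival plus noise.
az_true, el_true = np.deg2rad(40.0), np.deg2rad(25.0)
a = steering(az_true, el_true)[:, None]
s = (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))[None, :]
x = a @ s + 0.1 * (rng.normal(size=(Nx * Ny, snapshots))
                   + 1j * rng.normal(size=(Nx * Ny, snapshots)))

# MUSIC noise subspace from the sample covariance.
R = x @ x.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-1]               # all but the largest eigenvector (1 source)

# 1-D search: elevation fixed at the ephemeris-predicted value.
el_ephemeris = np.deg2rad(25.0)
az_grid = np.deg2rad(np.arange(0.0, 90.0, 0.1))
spectrum = []
for az in az_grid:
    v = steering(az, el_ephemeris)
    spectrum.append(1.0 / np.abs(v.conj() @ En @ En.conj().T @ v))
print("estimated azimuth [deg]:", np.rad2deg(az_grid[int(np.argmax(spectrum))]))
```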

25 pages, 5316 KiB  
Article
Aircraft System Identification Using Multi-Stage PRBS Optimal Inputs and Maximum Likelihood Estimator
by Muhammad Fawad Mazhar, Muhammad Wasim, Manzar Abbas, Jamshed Riaz and Raees Fida Swati
Viewed by 464
Abstract
A new method to discover open-loop, unstable, longitudinal aerodynamic parameters, using a ‘two-stage optimization approach’ for designing optimal inputs, and with an application on the fighter aircraft platform, has been presented. System identification of supersonic aircraft requires formulating optimal inputs due to the extremely limited maneuver time, high angles of attack, restricted flight conditions, and the demand for an enhanced computational effect. A pre-requisite of the parametric model identification is to have a priori aerodynamic parameter estimates, which were acquired using linear regression and Least Squares (LS) estimation, based upon simulated time histories of outputs from heuristic inputs, using an F-16 Flight Dynamic Model (FDM). In the ‘first stage’, discrete-time pseudo-random binary signal (PRBS) inputs were optimized using a minimization algorithm, in accordance with aircraft spectral features and aerodynamic constraints. In the ‘second stage’, an innovative concept of integrating the Fisher Informative Matrix with cost function based upon D-optimality criteria and Crest Factor has been utilized to further optimize the PRBS parameters, such as its frequency, amplitude, order, and periodicity. This unique optimum design also solves the problem of non-convexity, model over-parameterization, and misspecification; these are usually caused by the use of traditional heuristic (doublets and multistep) optimal inputs. After completing the optimal input framework, parameter estimation was performed using Maximum Likelihood Estimation. A performance comparison of four different PRBS inputs was made as part of our investigations. The model performance was validated by using statistical metrics, namely the following: residual analysis, standard errors, t statistics, fit error, and coefficient of determination (R2). Results have shown promising model predictions, with an accuracy of more than 95%, by using a Single Sequence Band-limited PRBS optimum input. This research concludes that, for the identification of the decoupled longitudinal Linear Time Invariant (LTI) aerodynamic model of supersonic aircraft, optimum PRBS shows better results than the traditional frequency sweeps, such as multi-sine, doublets, square waves, and impulse inputs. This work also provides the ability to corroborate control and stability derivatives obtained from Computational Fluid Dynamics (CFD) and wind tunnel testing. This further refines control law design, dynamic analysis, flying qualities assessments, accident investigations, and the subsequent design of an effective ground-based training simulator. Full article
(This article belongs to the Special Issue Flight Dynamics, Control & Simulation (2nd Edition))
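
The sketch below illustrates, under strong simplifying assumptions, how PRBS candidates can be scored with a D-optimality term (log-determinant of a Fisher information matrix) plus a crest-factor penalty. The LFSR taps, the ARX-style model used for the information matrix, and the cost weights are hypothetical and far simpler than the paper's two-stage F-16 input design.

```python
# Hedged sketch: generate PRBS excitations and rank them by a simple
# D-optimality-plus-crest-factor cost. Model structure and weights are
# illustrative, not the paper's identification setup.
import numpy as np

def prbs(n_bits, length, taps=(0, 1), seed=1):
    """PRBS in {-1, +1} from a simple LFSR (illustrative, not guaranteed maximal length)."""
    reg = [(seed >> i) & 1 for i in range(n_bits)]
    out = []
    for _ in range(length):
        out.append(2 * reg[0] - 1)
        fb = 0
        for t in taps:
            fb ^= reg[t]
        reg = reg[1:] + [fb]
    return np.asarray(out, float)

def crest_factor(u):
    return np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2))

def d_optimality(u, n_lags=3, sigma2=0.01):
    """log det of the Fisher information for y = sum_k theta_k * u[t-k] + e."""
    X = np.column_stack([np.roll(u, k) for k in range(n_lags)])[n_lags:]
    fim = X.T @ X / sigma2
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else -np.inf

# Compare a few candidate PRBS designs (register length / amplitude).
best = None
for n_bits in (5, 7, 9):
    for amp in (0.5, 1.0, 2.0):
        u = amp * prbs(n_bits, length=512)
        cost = -d_optimality(u) + 0.5 * crest_factor(u)   # minimize
        if best is None or cost < best[0]:
            best = (cost, n_bits, amp)
print("selected design (register bits, amplitude):", best[1:])
```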

27 pages, 6085 KiB  
Article
A Multi-Model Polynomial-Based Tracking Method for Targets with Complex Maneuvering Patterns
by Pikun Wang, Ling Wu, Junfei Xu and Faxing Lu
Viewed by 461
Abstract
In the absence of a priori knowledge about target motion characteristics, tracking complex maneuvering targets remains challenging. A multi-model, polynomial-based tracking method is presented to address this issue. Observation sequences of varying lengths are fitted by time polynomials of different orders, which are used to create a set of target motion models. Subsequently, a multi-model framework is employed to track maneuvering targets with uncertain motion characteristics. To verify the effectiveness of the approach, three datasets were created using kinematic equations, the Gazebo simulation platform, and real watercraft. On these three datasets, the proposed method is compared with classical multi-model methods and a deep learning method. Theoretical analysis and experimental results reveal that, in the absence of a priori information on target maneuvering features, the tracking error of the proposed method is 12.5~30% lower than that of the traditional multi-model (MM) method. Moreover, the proposed method avoids the accuracy degradation caused by model misalignment and parameter tuning that affects deep learning-based methods. Full article
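
A toy version of the polynomial multiple-model idea is sketched below: several polynomials of different orders are fitted to observation windows of different lengths, and their one-step predictions are fused with likelihood-based model probabilities. The window lengths, orders, noise levels, and the synthetic trajectory are illustrative assumptions, not the paper's datasets or model set.

```python
# Hedged sketch: multiple polynomial models over sliding windows, fused by
# likelihood-weighted model probabilities. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

# Candidate models: (window length, polynomial order).
models = [(5, 1), (9, 2), (15, 3)]
probs = np.ones(len(models)) / len(models)
meas_std = 0.5

def predict_next(times, obs, window, order):
    t, z = times[-window:], obs[-window:]
    coeffs = np.polyfit(t - t[-1], z, order)     # center time for conditioning
    return np.polyval(coeffs, 1.0)               # one step ahead

# Synthetic maneuvering target: constant velocity, then a turn-like curvature.
t_axis = np.arange(60.0)
truth = np.where(t_axis < 30, 2.0 * t_axis,
                 60 + 2.0 * (t_axis - 30) - 0.15 * (t_axis - 30) ** 2)
obs = truth + rng.normal(scale=meas_std, size=truth.size)

fused = []
for k in range(16, len(t_axis) - 1):
    times, hist = t_axis[:k + 1], obs[:k + 1]
    preds = np.array([predict_next(times, hist, w, p) for w, p in models])
    # Likelihood of the next measurement under each model (Gaussian).
    err = obs[k + 1] - preds
    lik = np.exp(-0.5 * (err / meas_std) ** 2) + 1e-12
    probs = probs * lik
    probs /= probs.sum()
    fused.append(float(probs @ preds))

rmse = np.sqrt(np.mean((np.array(fused) - truth[17:]) ** 2))
print("fused one-step prediction RMSE:", round(rmse, 3))
```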

20 pages, 8117 KiB  
Article
Enhancing the Transformer Model with a Convolutional Feature Extractor Block and Vector-Based Relative Position Embedding for Human Activity Recognition
by Xin Guo, Young Kim, Xueli Ning and Se Dong Min
Sensors 2025, 25(2), 301; https://doi.org/10.3390/s25020301 - 7 Jan 2025
Viewed by 375
Abstract
The Transformer model has received significant attention in Human Activity Recognition (HAR) due to its self-attention mechanism that captures long dependencies in time series. However, for Inertial Measurement Unit (IMU) sensor time-series signals, the Transformer model does not effectively utilize the a priori information of strong complex temporal correlations. Therefore, we proposed using multi-layer convolutional layers as a Convolutional Feature Extractor Block (CFEB). CFEB enables the Transformer model to leverage both local and global time series features for activity classification. Meanwhile, the absolute position embedding (APE) in existing Transformer models cannot accurately represent the distance relationship between individuals at different time points. To further explore positional correlations in temporal signals, this paper introduces the Vector-based Relative Position Embedding (vRPE), aiming to provide more relative temporal position information within sensor signals for the Transformer model. Combining these innovations, we conduct extensive experiments on three HAR benchmark datasets: KU-HAR, UniMiB SHAR, and USC-HAD. Experimental results demonstrate that our proposed enhancement scheme substantially elevates the performance of the Transformer model in HAR. Full article
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
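
The sketch below shows one plausible reading of the two ingredients named in the abstract: a convolutional front end ahead of self-attention, and attention scores augmented with a learnable embedding indexed by the relative offset between time steps. It is a generic PyTorch stand-in for the paper's CFEB and vRPE, with illustrative layer sizes.

```python
# Hedged sketch: conv feature extractor + self-attention with a learnable
# relative-position bias (generic stand-in for CFEB + vRPE).
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    def __init__(self, in_ch=6, d_model=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )

    def forward(self, x):                        # x: (batch, channels, time)
        return self.net(x).transpose(1, 2)       # -> (batch, time, d_model)

class RelPosSelfAttention(nn.Module):
    def __init__(self, d_model=64, max_len=128):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))
        self.max_len = max_len
        self.scale = d_model ** -0.5

    def forward(self, x):                        # x: (batch, time, d_model)
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = torch.einsum("btd,bsd->bts", q, k) * self.scale
        # Look up a bias for every (t, s) pair from its relative offset t - s.
        offsets = torch.arange(T)[:, None] - torch.arange(T)[None, :]
        scores = scores + self.rel_bias[offsets + self.max_len - 1]
        attn = scores.softmax(dim=-1)
        return torch.einsum("bts,bsd->btd", attn, v)

# Tiny end-to-end check on random IMU-like input (batch, 6 channels, 128 steps).
frontend, attn = ConvFrontEnd(), RelPosSelfAttention()
features = attn(frontend(torch.randn(4, 6, 128)))
print(features.shape)                            # torch.Size([4, 128, 64])
```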

23 pages, 9410 KiB  
Article
Application of Reduced Order Surrogate Models in Compatible Determination of Material Properties Profiles by Eddy Current Method
by Volodymyr Y. Halchenko, Ruslana Trembovetska, Volodymyr Tychkov, Viacheslav Kovtun and Nataliia Tychkova
Viewed by 464
Abstract
A series of computer experiments investigated the accuracy of a method for simultaneously determining the distributions of electrical conductivity and magnetic permeability in the subsurface zone of planar conductive objects when modeling eddy-current measurement testing with surface probes. The method is based on surrogate optimization, in which a high-performance neural-network proxy model of the probe, built by deep learning, is embedded in the quadratic objective function. The surrogate model acts as a carrier and store of a priori information about the object and accounts for all the main factors that shape the probe output signal. The cumbersomeness of the surrogate model and the “curse of dimensionality” are mitigated by reducing the dimensionality of the design space with the PCA algorithm. We investigated trade-offs between the dimensionality of the PCA space and the accuracy with which the desired material property profiles are recovered by the optimization method. The results of modeling the inverse measurement problem indicate fairly high accuracy of profile reconstruction. Full article
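
A compact sketch of surrogate-based inversion in a PCA-reduced design space follows: a toy forward model replaces the eddy-current probe simulation, an MLP surrogate is trained on PCA scores of candidate profiles, and the profile is recovered by minimizing a quadratic misfit over those scores. All dimensions, the forward model, and the profile family are assumptions for illustration.

```python
# Hedged sketch: PCA-reduced surrogate inversion with a toy forward model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_depth, n_train, n_pc = 40, 2000, 5
depth = np.linspace(0, 1, n_depth)

def forward(profile):
    """Toy probe response: a few weighted integrals of the depth profile."""
    kernels = np.exp(-np.outer(np.arange(1, 9), depth))
    return kernels @ profile / n_depth

# Training profiles: smooth random conductivity-like curves.
profiles = np.array([1 + 0.5 * np.sin(2 * np.pi * rng.uniform(0.5, 2) * depth
                                       + rng.uniform(0, 2 * np.pi))
                     for _ in range(n_train)])
signals = np.array([forward(p) for p in profiles])

# Reduce profile dimensionality, then train the surrogate on PCA scores.
pca = PCA(n_components=n_pc).fit(profiles)
scores = pca.transform(profiles)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(scores, signals)

# "Measured" signal from a held-out true profile, then quadratic inversion.
true_profile = 1 + 0.5 * np.sin(2 * np.pi * 1.3 * depth)
measured = forward(true_profile)

def misfit(z):
    pred = surrogate.predict(z[None, :])[0]
    return np.sum((pred - measured) ** 2)

res = minimize(misfit, x0=np.zeros(n_pc), method="Nelder-Mead")
recovered = pca.inverse_transform(res.x[None, :])[0]
rms = float(np.sqrt(np.mean((recovered - true_profile) ** 2)))
print("profile RMS error:", round(rms, 3))
```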

21 pages, 6954 KiB  
Article
Disturbance Observer-Based Dynamic Surface Control for Servomechanisms with Prescribed Tracking Performance
by Xingfa Zhao, Wenhe Liao, Tingting Liu, Dongyang Zhang and Yumin Tao
Mathematics 2025, 13(1), 172; https://doi.org/10.3390/math13010172 - 6 Jan 2025
Viewed by 426
Abstract
The critical design challenge for a class of servomechanisms is to reject unknown dynamics (including internal uncertainties and external disturbances) while achieving prescribed tracking-error performance. To remove the influence of unknown dynamics, an extended state observer (ESO) is employed to estimate the system states and the total unknown dynamics without requiring a priori information about the system dynamics. Meanwhile, an improved prescribed performance function is presented to guarantee the transient performance of the tracking error (e.g., the overshoot, convergence rate, and steady-state error). A modified dynamic surface control strategy is then designed based on the ESO estimates and the error constraints. The stability of the proposed control strategy is demonstrated using Lyapunov theory. Finally, simulation results for a turntable servomechanism show that the proposed method is effective, with better control performance and stronger disturbance rejection than the traditional control method. Full article
(This article belongs to the Section C2: Dynamical Systems)
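
To make the observer/controller pairing concrete, the sketch below simulates a second-order servo with a lumped unknown disturbance, a linear extended state observer that estimates position, velocity, and the total disturbance, and a simple dynamic-surface-style controller with a first-order filter on the virtual control. Plant parameters, gains, and the disturbance are illustrative, not the paper's turntable model or prescribed-performance design.

```python
# Hedged sketch: linear ESO + dynamic-surface-style control for a toy servo.
import numpy as np

dt, T = 1e-3, 5.0
steps = int(T / dt)
b0 = 10.0                              # nominal control gain

# Plant: x1' = x2, x2' = b0*u + d(t), with d the lumped unknown dynamics.
x = np.zeros(2)
z = np.zeros(3)                        # ESO states: [x1_hat, x2_hat, d_hat]
wo = 60.0                              # observer bandwidth
l1, l2, l3 = 3 * wo, 3 * wo ** 2, wo ** 3

k1, k2, tau = 8.0, 20.0, 0.02          # controller gains and filter constant
alpha_f = 0.0                          # filtered virtual control
log_err = []

for i in range(steps):
    t = i * dt
    ref = np.sin(t)                    # reference trajectory
    d = 2.0 * np.sin(3 * t) + 1.0      # unknown disturbance + bias

    # --- dynamic surface control using ESO estimates ---
    e1 = z[0] - ref
    alpha = -k1 * e1 + np.cos(t)       # virtual control (reference derivative known)
    alpha_f += dt * (alpha - alpha_f) / tau        # first-order surface filter
    e2 = z[1] - alpha_f
    u = (-k2 * e2 - z[2]) / b0         # cancel the estimated total disturbance

    # --- plant integration (Euler) ---
    x_dot = np.array([x[1], b0 * u + d])
    x += dt * x_dot

    # --- linear ESO update driven by the measured position x1 ---
    err = x[0] - z[0]
    z += dt * np.array([z[1] + l1 * err,
                        z[2] + b0 * u + l2 * err,
                        l3 * err])

    log_err.append(x[0] - ref)

print("max |tracking error| over the last second:",
      round(float(np.max(np.abs(log_err[-1000:]))), 4))
```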

18 pages, 1623 KiB  
Article
Enhanced Stochastic Models for VLBI Invariant Point Estimation and Axis Offset Analysis
by Chang-Ki Hong and Tae-Suk Bae
Remote Sens. 2025, 17(1), 43; https://doi.org/10.3390/rs17010043 - 26 Dec 2024
Viewed by 445
Abstract
The accuracy and stability of Very Long Baseline Interferometry (VLBI) systems are essential for maintaining global geodetic reference frames such as the International Terrestrial Reference Frame (ITRF). This study focuses on the precise determination of the VLBI Invariant Point (IVP) and the detection of antenna axis offset. Ground-based surveys were conducted at the Sejong Space Geodetic Observatory using high-precision instruments, including a total station, to measure slant distances as well as horizontal and vertical angles from fixed pillars to reflectors attached to the VLBI instrument. The reflectors comprised both prisms and reflective sheets to enhance redundancy and data reliability. A detailed stochastic model incorporating variance component estimation was employed to manage the varying precision of the observations. The analysis revealed significant measurement variability, particularly in slant distance measurements involving prisms. Iterative refinement of the variance components improved the reliability of the IVP and antenna axis offset estimates. The study identified an antenna axis offset of 5.6 mm, which was statistically validated through hypothesis testing and confirmed as significant at the 0.01 significance level (approximately a 2.576-sigma threshold, corresponding to a 99% confidence level). This study highlights the importance of accurate stochastic modeling in ensuring the precision and reliability of the estimated VLBI IVP and antenna axis offset. Additionally, the results can serve as a priori information for VLBI data analysis. Full article
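
The sketch below illustrates the variance-component idea on a miniature problem: two observation groups of different quality (standing in for prisms and reflective sheets) observe a 2-D point via distances from known pillars, and the group variances are re-estimated iteratively from residuals and redundancy contributions. The geometry, noise levels, and update rule are simplified assumptions, not the survey's adjustment model.

```python
# Hedged sketch: iterative variance component estimation for two groups of
# distance observations in a small nonlinear least-squares adjustment.
import numpy as np

rng = np.random.default_rng(4)
pillars = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, -5], [-5, 5.0]])
truth = np.array([4.0, 6.0])

# Group 0 (e.g. prisms) is noisier than group 1 (e.g. reflective sheets).
sig = np.array([3e-3, 3e-3, 3e-3, 1e-3, 1e-3, 1e-3])
groups = np.array([0, 0, 0, 1, 1, 1])
obs = np.linalg.norm(pillars - truth, axis=1) + rng.normal(0, sig)

x = np.array([3.0, 5.0])                 # initial point estimate
var = np.array([1e-6, 1e-6])             # initial variance components
for _ in range(10):
    d0 = np.linalg.norm(pillars - x, axis=1)
    A = (x - pillars) / d0[:, None]      # design matrix of the linearized model
    l = obs - d0                         # reduced observations
    P = np.diag(1.0 / var[groups])       # weights from current components
    N_inv = np.linalg.inv(A.T @ P @ A)
    dx = N_inv @ A.T @ P @ l
    x = x + dx
    v = A @ dx - l                       # residuals
    # Redundancy contribution of each observation, summed per group.
    R = np.eye(len(obs)) - A @ N_inv @ A.T @ P
    for g in (0, 1):
        idx = groups == g
        r_g = np.trace(R[np.ix_(idx, idx)])
        if r_g > 1e-6:
            var[g] = (v[idx] @ v[idx]) / r_g

print("estimated point             :", np.round(x, 4))
print("estimated std per group [mm]:", np.round(np.sqrt(var) * 1e3, 2))
```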

17 pages, 2730 KiB  
Article
Redefining Contextual and Boundary Synergy: A Boundary-Guided Fusion Network for Medical Image Segmentation
by Yu Chen, Yun Wu, Jiahua Wu, Xinxin Zhang, Dahan Wang and Shunzhi Zhu
Electronics 2024, 13(24), 4986; https://doi.org/10.3390/electronics13244986 - 18 Dec 2024
Viewed by 562
Abstract
Medical image segmentation plays a crucial role in medical image processing, focusing on the automated extraction of regions of interest (such as organs and lesions) from medical images. This process supports various clinical applications, including diagnosis, surgical planning, and treatment. In this paper, we introduce the Boundary-guided Context Fusion U-Net (BCF-UNet), a novel approach designed to tackle a critical shortcoming in current methods: the inability to effectively integrate boundary information with semantic context. The BCF-UNet introduces an Adaptive Multi-Frequency Encoder (AMFE), which uses multi-frequency analysis inspired by the Wavelet Transform (WT) to capture both local and global features efficiently. The AMFE decomposes images into different frequency components and adapts more effectively to boundary texture information through a learnable activation function. Additionally, we introduce a new multi-scale feature fusion module, the Atten-kernel Adaptive Fusion Module (AKAFM), designed to integrate deep semantic information with shallow texture details, significantly bridging the gap between features at different scales. Furthermore, each layer of the encoder sub-network integrates a Boundary-aware Pyramid Module (BAPM), which uses a simple and effective scheme combined with a priori knowledge to extract multi-scale edge features and improve the accuracy of boundary segmentation. In BCF-UNet, semantic context is used to guide edge information extraction, enabling the model to more effectively comprehend and identify relationships among various organizational structures. Comprehensive experimental evaluations on two datasets demonstrate that the proposed BCF-UNet achieves superior performance compared to existing state-of-the-art methods. Full article
(This article belongs to the Section Artificial Intelligence)

21 pages, 325 KiB  
Article
Quantum Collapse and Computation in an Everett Multiverse
by Fabrizio Tamburini and Ignazio Licata
Entropy 2024, 26(12), 1068; https://doi.org/10.3390/e26121068 - 9 Dec 2024
Viewed by 1025
Abstract
The mathematical representation of the universe consists of sequences of symbols, rules and operators containing Gödel’s undecidable propositions: information and its manipulation, also with Turing Machines. Classical information theory and mathematics, ideally independent from the medium used, can be interpreted realistically and objectively from their correspondence with quantum information, which is physical. Each representation of the universe and its evolution are, in any case, physical subsets of the universe, structured sets of observers and their complements in the universe made with spacetime events generated by local quantum measurements. Their description becomes a semantically closed structure without a global object-environment loss of decoherence as a von Neumann’s universal constructor with a semantical abstract whose structure cannot be decided deterministically a priori from an internal observer. In a semantically closed structure, the realization of a specific event that writes the semantical abstract of the constructor is a problem of finding “which way” for the evolution of the universe as a choice of the constructor’s state in a metastructure, like the many-world Everett scenario, from a specific result of any quantum measurement, corresponding to a Gödel undecidable proposition for an internal observer. Full article
(This article belongs to the Section Complexity)
17 pages, 4968 KiB  
Article
A Refined Spatiotemporal ZTD Model of the Chinese Region Based on ERA and GNSS Data
by Yongzhao Fan, Fengyu Xia, Zhimin Sha and Nana Jiang
Remote Sens. 2024, 16(23), 4515; https://doi.org/10.3390/rs16234515 - 2 Dec 2024
Viewed by 605
Abstract
Empirical tropospheric models can improve the performance of GNSS precise point positioning (PPP) by providing a priori zenith tropospheric delay (ZTD) information. However, existing models experience insufficient ZTD profile refinement, inadequate correction for systematic bias between the ZTD used in empirical modelling and the GNSS ZTD, and low time efficiency in model updating as more data become available. Therefore, a refined spatiotemporal empirical ZTD model was developed in this study on the basis of the fifth generation of European Centre for Medium-Range Weather Forecasts Reanalysis (ERA5) data and GNSS data. First, an ENM-R profile model was established by refining the modelling height of the negative exponential function model (ENM). Second, a regression kriging interpolation method was designed to model the systematic bias correction between the ERA5 ZTD and the GNSS ZTD. Last, the final refined ZTD model, ENM-RS, was established by introducing systematic bias correction into ENM-R. Experiments suggest that, compared with the ENM-R and GPT3 models, ENM-RS can effectively suppress systematic bias and improve ZTD modelling accuracy by 10~17%. To improve model update efficiency, the idea of updating an empirical model with sequential least square (SLSQ) adjustment is proposed for the first time. When ENM-RS is modelled via 12 years of ERA data, our method can reduce the time consumption to one-fifth of that of the traditional method. The benefits of our ENM-RS model are evaluated with PPP. The results show that relative to PPP solutions with ENM-R- and GPT3-derived ZTD constraints as well as no constraint, the ENM-RS ZTD constraint can decrease PPP convergence time by approximately 10~30%. Full article
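
The sketch below illustrates the model-update idea mentioned at the end of the abstract: when a new batch of data arrives, the normal equations are accumulated sequentially instead of refitting the whole model from scratch, and the sequential solution matches the batch solution. The seasonal design matrix and synthetic "ZTD" series are assumptions, not the ENM-RS parameterization or ERA5 data.

```python
# Hedged sketch: sequential least squares via accumulated normal equations,
# checked against the all-at-once batch solution.
import numpy as np

rng = np.random.default_rng(5)
n_par = 5

def design(doy):
    """Constant plus annual and semi-annual harmonics (illustrative model)."""
    w = 2 * np.pi * doy / 365.25
    return np.column_stack([np.ones_like(doy), np.cos(w), np.sin(w),
                            np.cos(2 * w), np.sin(2 * w)])

def new_batch():
    """One synthetic year of daily 'ZTD' values (metres)."""
    doy = np.arange(1.0, 366.0)
    A = design(doy)
    true_coeff = np.array([2.4, 0.08, -0.03, 0.01, 0.02])
    return A, A @ true_coeff + rng.normal(0, 0.01, doy.size)

batches = [new_batch() for _ in range(12)]       # e.g. 12 years of data

# Sequential LS: keep only the accumulated normal equations N, b; each update
# touches the newest batch only, so adding a year is cheap.
N, b = np.zeros((n_par, n_par)), np.zeros(n_par)
for A, y in batches:
    N += A.T @ A
    b += A.T @ y
    coeff_seq = np.linalg.solve(N, b)            # model after this batch

# Batch solution over all years, for comparison.
A_all = np.vstack([A for A, _ in batches])
y_all = np.concatenate([y for _, y in batches])
coeff_batch = np.linalg.lstsq(A_all, y_all, rcond=None)[0]
print("max |sequential - batch| coefficient difference:",
      float(np.max(np.abs(coeff_seq - coeff_batch))))
```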

36 pages, 14204 KiB  
Article
A Novel Algorithm for Precise Orbit Determination Using a Single Satellite Laser Ranging System Within a Single Arc for Space Surveillance and Tracking
by Dong-Gu Kim, Sang-Young Park and Eunji Lee
Aerospace 2024, 11(12), 989; https://doi.org/10.3390/aerospace11120989 - 29 Nov 2024
Viewed by 701
Abstract
A satellite laser ranging (SLR) system uses lasers to measure the range from ground stations to space objects with millimeter-level precision. Recent advances in SLR systems have increased their use in space surveillance and tracking (SST). The problem addressed here, precise orbit determination (POD) using one-dimensional range observations within a single arc, is challenging because the limited observability admits infinitely many solutions, so general orbit determination algorithms struggle to achieve reasonable accuracy. The proposed algorithm redefines the cost value for orbit determination by leveraging residual tendencies in the POD process. The tendencies of the residuals are quantified as R-squared values obtained from Fourier-series fitting and used to recover velocity vector information. The algorithm then corrects velocity vector errors through a grid search and least squares (LS) with a priori information. This approach corrects all six dimensions of the state vector, comprising the position and velocity vectors, using only one-dimensional range observations. Simulations of three satellites using real data validate the algorithm. In all cases, the errors of the two-line element data used as initial values (approximately 1 km in three-dimensional position and 1 m/s in velocity) were reduced to tens of meters and to the cm/s level, respectively. The algorithm outperformed a general POD algorithm using only the LS method, which does not effectively reduce the errors. This study offers a more efficient and accurate orbit determination method, improving the safety, cost efficiency, and effectiveness of space operations. Full article
(This article belongs to the Section Astronautics & Space Science)
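
A toy rendering of the residual-tendency cost is sketched below: a velocity error leaves a smooth systematic signature in the within-arc range residuals, a low-order Fourier series is fitted to them, and its R-squared is minimized over a grid of candidate corrections. The residual model is a stand-in assumption; no orbit propagation or real SLR data are involved.

```python
# Hedged sketch: score candidate velocity corrections by the R^2 of a
# Fourier-series fit to the within-arc range residuals.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 600.0, 200)            # seconds within a single pass

def range_residuals(dv):
    """Toy residuals: a systematic arc proportional to the remaining velocity
    error dv (one sinusoidal term) plus white measurement noise."""
    systematic = 50.0 * dv * np.sin(2 * np.pi * t / 600.0 + 0.3)
    return systematic + rng.normal(0, 0.02, t.size)

def fourier_r2(res, n_harm=2):
    """R^2 of a least-squares Fourier-series fit to the residuals."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * t / t[-1]),
                 np.sin(2 * np.pi * k * t / t[-1])]
    A = np.column_stack(cols)
    fit = A @ np.linalg.lstsq(A, res, rcond=None)[0]
    ss_res = np.sum((res - fit) ** 2)
    ss_tot = np.sum((res - res.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

true_dv = 0.8                                # m/s velocity error in the guess
grid = np.arange(-1.5, 1.5001, 0.05)         # candidate corrections
# Apply each candidate correction and keep the one whose residuals look whitest.
scores = [fourier_r2(range_residuals(true_dv - dv)) for dv in grid]
best = grid[int(np.argmin(scores))]
print("recovered velocity correction [m/s]:", round(float(best), 2))
```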

15 pages, 3908 KiB  
Article
Efficient Trans-Dimensional Bayesian Inversion of C-Response Data from Geomagnetic Observatory and Satellite Magnetic Data
by Rongwen Guo, Shengqi Tian, Jianxin Liu, Yi-an Cui and Chuanghua Cao
Appl. Sci. 2024, 14(23), 10944; https://doi.org/10.3390/app142310944 - 25 Nov 2024
Viewed by 624
Abstract
To investigate the deep Earth, researchers often use geomagnetic observatory and satellite data to obtain the transfer function of geomagnetic sounding, the C-response, and employ traditional inversion techniques to reconstruct subsurface structure. However, traditional gradient-based inversion produces geophysical models with artificial structural constraints enforced subjectively to guarantee a unique solution. Such methods typically require the model parameterization to be known a priori (e.g., chosen by personal preference) and provide no uncertainty estimation. In this paper, we apply an efficient trans-dimensional (trans-D) Bayesian algorithm to invert C-response data derived from observatory and satellite geomagnetic data for the electrical conductivity structure of the Earth’s mantle, with the model parameterization treated as unknown and determined by the data. In trans-D Bayesian inversion, the posterior probability density (PPD) represents the complete inversion solution, and useful inferences about the model require high-dimensional integration of the PPD. This is realized by an efficient reversible-jump Markov chain Monte Carlo (rjMcMC) sampling algorithm based on a birth/death scheme. Within the trans-D Bayesian algorithm, the model parameters are perturbed in the principal-component parameter space to minimize the effect of inter-parameter correlations and improve sampling efficiency. A parallel tempering scheme is applied to ensure complete sampling of the multimodal model space. First, the trans-D Bayesian inversion is applied to C-response data from two synthetic models to examine the resolution of the model structure constrained by the data. Then, C-response data from geomagnetic satellites and observatories are inverted to recover the globally averaged mantle conductivity structure and the local mantle structure with quantitative uncertainty estimation consistent with the data. Full article
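
The sketch below shows only the trans-dimensional machinery on a toy problem: a birth/death reversible-jump MCMC infers the number of nodes of a piecewise-constant 1-D profile from noisy direct observations. The "forward model", priors, and proposal choices are simplifying assumptions; the C-response induction modelling, principal-component perturbation, and parallel tempering of the paper are not reproduced.

```python
# Hedged sketch: birth/death rjMcMC over a nearest-node (Voronoi) 1-D profile.
import numpy as np

rng = np.random.default_rng(7)
depth = np.linspace(0.0, 1.0, 100)
true_vals = np.where(depth < 0.4, 1.0, np.where(depth < 0.7, 3.0, 2.0))
sigma = 0.2
data = true_vals + rng.normal(0, sigma, depth.size)
k_min, k_max, v_lo, v_hi = 1, 12, 0.0, 5.0

def predict(pos, val):
    """Profile value at each depth = value of the nearest node."""
    pos, val = np.asarray(pos), np.asarray(val)
    return val[np.abs(depth[:, None] - pos[None, :]).argmin(axis=1)]

def log_like(pos, val):
    r = data - predict(pos, val)
    return -0.5 * np.sum((r / sigma) ** 2)

pos, val = [0.5], [2.0]
ll = log_like(pos, val)
n_nodes = []

for it in range(30000):
    move = rng.choice(["perturb", "birth", "death"])
    p, v = list(pos), list(val)
    if move == "perturb":
        j = rng.integers(len(v))
        v[j] = float(np.clip(v[j] + rng.normal(0, 0.2), v_lo, v_hi))
    elif move == "birth" and len(p) < k_max:
        p.append(rng.uniform(0, 1))
        v.append(rng.uniform(v_lo, v_hi))     # birth value drawn from the prior
    elif move == "death" and len(p) > k_min:
        j = rng.integers(len(p))
        p.pop(j)
        v.pop(j)
    else:
        continue
    new_ll = log_like(p, v)
    # With uniform priors and birth values drawn from the prior, the acceptance
    # ratio collapses to the likelihood ratio (edge effects at k bounds ignored).
    if np.log(rng.uniform()) < new_ll - ll:
        pos, val, ll = p, v, new_ll
    if it > 10000:
        n_nodes.append(len(pos))

print("posterior mode of the number of nodes:", int(np.bincount(n_nodes).argmax()))
```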

25 pages, 3239 KiB  
Article
Machine Learning a Probabilistic Structural Equation Model to Explain the Impact of Climate Risk Perceptions on Policy Support
by Asim Zia, Katherine Lacasse, Nina H. Fefferman, Louis J. Gross and Brian Beckage
Sustainability 2024, 16(23), 10292; https://doi.org/10.3390/su162310292 - 25 Nov 2024
Viewed by 925
Abstract
While a flurry of studies and Integrated Assessment Models (IAMs) have independently investigated the impacts of switching mitigation policies in response to different climate scenarios, little is understood about the feedback effect of how human risk perceptions of climate change could contribute to switching climate mitigation policies. This study presents a novel machine learning approach, utilizing a probabilistic structural equation model (PSEM), for understanding complex interactions among climate risk perceptions, beliefs about climate science, political ideology, demographic factors, and their combined effects on support for mitigation policies. We use the machine learning-based PSEM to identify the latent variables and quantify their complex interaction effects on support for climate policy. As opposed to the a priori clustering of manifest variables into latent variables implemented in traditional SEMs, the novel PSEM presented in this study uses unsupervised algorithms to identify data-driven clustering of manifest variables into latent variables. Further, information-theoretic metrics are used to estimate both the structural relationships among latent variables and the optimal number of classes within each latent variable. The PSEM yields an R2 of 92.2% derived from the “Climate Change in the American Mind” dataset (2008–2018 [N = 22,416]), which is a substantial improvement over a traditional regression analysis-based study applied to the CCAM dataset that identified five manifest variables to account for 51% of the variance in policy support. The PSEM uncovers a previously unidentified class of “lukewarm supporters” (~59% of the US population), different from strong supporters (27%) and opposers (13%). These lukewarm supporters represent a wide swath of the US population, but their support may be capricious and sensitive to the details of the policy and how it is implemented. Individual survey items clustered into latent variables reveal that the public does not respond to “climate risk perceptions” as a single construct in their minds. Instead, PSEM path analysis supports dual processing theory: analytical and affective (emotional) risk perceptions are identified as separate, unique factors, which, along with climate beliefs, political ideology, and race, explain much of the variability in the American public’s support for climate policy. The machine learning approach demonstrates that complex interaction effects of belief states combined with analytical and affective risk perceptions, as well as political ideology, party, and race, will need to be considered for informing the design of feedback loops in IAMs that endogenously feed back the impacts of global climate change on the evolution of climate mitigation policies. Full article
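
Two ingredients described in the abstract are illustrated below on synthetic survey-like data: unsupervised grouping of manifest items into latent variables, and selection of the number of classes of a latent variable with an information criterion (BIC). The scikit-learn tools, item structure, and respondent counts are assumptions; this is not the authors' probabilistic structural equation model.

```python
# Hedged sketch: data-driven grouping of survey items into latent variables
# and information-criterion selection of the number of latent classes.
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
n_resp = 3000

# Two underlying constructs (e.g. "affective risk" and "analytical risk"),
# each driving a block of observed items; the first is made bimodal.
affect = rng.normal(size=n_resp) + 2.5 * (rng.random(n_resp) < 0.5)
analytic = rng.normal(size=n_resp)
items = np.column_stack(
    [affect + 0.4 * rng.normal(size=n_resp) for _ in range(4)] +
    [analytic + 0.4 * rng.normal(size=n_resp) for _ in range(4)])

# (i) Cluster items (columns) into latent groups from their co-variation.
agg = FeatureAgglomeration(n_clusters=2).fit(items)
print("item -> latent-variable assignment:", agg.labels_)

# (ii) Score each respondent on one latent variable (mean of its items), then
# pick the number of discrete classes by minimizing BIC.
latent_score = items[:, agg.labels_ == 0].mean(axis=1).reshape(-1, 1)
bics = {k: GaussianMixture(n_components=k, random_state=0)
              .fit(latent_score).bic(latent_score) for k in range(1, 6)}
print("BIC per class count:", {k: round(v, 1) for k, v in bics.items()})
print("selected number of classes:", min(bics, key=bics.get))
```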
