Search Results (11,900)

Search Parameters:
Keywords = Artificial Neural Networks

16 pages, 2345 KiB  
Article
Personalized Predictions of Therapeutic Hypothermia Outcomes in Cardiac Arrest Patients with Shockable Rhythms Using Explainable Machine Learning
by Chien-Tai Hong, Oluwaseun Adebayo Bamodu, Hung-Wen Chiu, Wei-Ting Chiu, Lung Chan and Chen-Chih Chung
Diagnostics 2025, 15(3), 267; https://doi.org/10.3390/diagnostics15030267 - 23 Jan 2025
Abstract
Background: Therapeutic hypothermia (TH) represents a critical therapeutic intervention for patients with cardiac arrest, although treatment efficacy and prognostic factors may vary between individuals. Precise, personalized outcome predictions can empower better clinical decisions. Methods: In this multi-center retrospective cohort study involving nine medical centers in Taiwan, we developed machine learning algorithms to predict neurological outcomes in patients who experienced cardiac arrest with shockable rhythms and underwent TH. The study cohort comprised 209 patients treated between January 2014 and September 2019. The models were trained on patients’ pre-treatment characteristics collected during this study period. The optimal artificial neural network (ANN) model was interpreted using the SHapley Additive exPlanations (SHAP) method. Results: Among the 209 enrolled patients, 79 (37.80%) demonstrated favorable neurological outcomes at discharge. The ANN model achieved an area under the curve value of 0.9089 (accuracy = 0.8330, precision = 0.7984, recall = 0.7492, specificity = 0.8846) for outcome prediction. SHAP analysis identified vital predictive features, including the dose of epinephrine during resuscitation, diabetes status, body temperature at return of spontaneous circulation (ROSC), whether the cardiac arrest was witnessed, and diastolic blood pressure at ROSC. Using real-life case examples, we demonstrated how the ANN model provides personalized prognostic predictions tailored to individuals’ distinct profiles. Conclusion: Our machine learning approach delivers personalized forecasts of TH outcomes in cardiac arrest patients with shockable rhythms. By accounting for each patient’s unique health history and cardiac arrest event details, the ANN model empowers more precise risk stratification, tailoring clinical decision-making regarding TH prognostication and optimizing personalized treatment planning.
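As a rough illustration of the workflow this abstract describes (an ANN classifier explained with SHAP), the following sketch trains a small scikit-learn MLP on placeholder data and computes SHAP values for a few test cases. The feature names are taken from the abstract, but the data, model size, and preprocessing are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an ANN outcome classifier with SHAP
# explanations, assuming a pre-processed feature matrix like the one described.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["epinephrine_dose", "diabetes", "temp_at_rosc",
                 "witnessed_arrest", "diastolic_bp_at_rosc"]   # illustrative subset
X = rng.normal(size=(209, len(feature_names)))                 # placeholder data
y = rng.integers(0, 2, size=209)                               # favorable outcome (0/1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

# SHAP values give per-patient, personalized feature attributions
explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1],
                                 scaler.transform(X_train[:50]))
shap_values = explainer.shap_values(scaler.transform(X_test[:5]))
print(np.asarray(shap_values).shape)
```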

37 pages, 3887 KiB  
Article
Reinforced NEAT Algorithms for Autonomous Rover Navigation in Multi-Room Dynamic Scenario
by Dhadkan Shrestha and Damian Valles
Abstract
This paper demonstrates the performance of autonomous rovers utilizing NeuroEvolution of Augmenting Topologies (NEAT) in multi-room scenarios and explores their potential applications in wildfire management and search and rescue missions. Simulations in three- and four-room scenarios were conducted over 100 to 10,000 generations, comparing standard learning with transfer learning from a pre-trained single-room model. The task required rovers to visit all rooms before returning to the starting point. Performance metrics included fitness score, successful room visits, and return rates. The results revealed significant improvements in rover performance across generations for both scenarios, with transfer learning providing substantial advantages, particularly in early generations. Transfer learning achieved 32 successful returns after 10,000 generations for the three-room scenario compared to 34 with standard learning. In the four-room scenario, transfer learning achieved 32 successful returns. Heatmap analyses highlighted efficient navigation strategies, particularly around starting points and target zones. This study highlights NEAT’s adaptability to complex navigation problems, showcasing the utility of transfer learning. Additionally, it proposes the integration of NEAT with UAV systems and collaborative robotic frameworks for fire suppression, fuel characterization, and dynamic fire boundary detection, further strengthening its role in real-world emergency management.
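For readers unfamiliar with NEAT, the sketch below shows the shape of a NEAT training loop using the neat-python package. The `simulate_rover` episode function, the fitness scheme, and the local `neat_config.txt` file are hypothetical stand-ins for the paper's multi-room simulator, not the authors' setup.

```python
# Minimal sketch (not the authors' setup) of a NEAT training loop with neat-python,
# assuming a hypothetical room-navigation simulator and a local "neat_config.txt".
import neat

def simulate_rover(net):
    """Hypothetical placeholder for the multi-room simulator: feed sensor readings
    to the evolved network and return (rooms_visited, returned_to_start)."""
    sensor_readings = [0.0] * 8              # e.g., 8 range sensors (illustrative)
    _motor_commands = net.activate(sensor_readings)
    return 3, True                           # placeholder episode outcome

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        rooms_visited, returned = simulate_rover(net)
        # fitness rewards visiting rooms and returning to the starting point
        genome.fitness = rooms_visited + (10.0 if returned else 0.0)

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_config.txt")      # assumed config file
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))
winner = population.run(eval_genomes, 100)   # e.g., 100 generations
print(winner.fitness)
```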
23 pages, 6562 KiB  
Article
Neural Network Prediction of Locomotive Engine Parameters Based on the Dung Beetle Optimization Algorithm and Multi-Objective Optimization of Engine Operating Parameters
by Aiqi Dong, Lijuan Liu, Chunce Zhao and Ying Guan
Sensors 2025, 25(3), 677; https://doi.org/10.3390/s25030677 - 23 Jan 2025
Abstract
Altitude has a significant impact on the power and emissions of diesel engines. This paper combines neural network prediction models with artificial intelligence-based multi-objective optimization algorithms to analyze the performance of internal combustion engines for plateau dual-source locomotives operating at different altitudes. The study focuses on the altitude range based on the Laji Line and selects decision variables and output objectives that significantly affect diesel engine performance for joint optimization. First, the diesel engine is simulated and modeled using GT-Power to generate the required dataset. Then, a random sampling method is applied to generate a dataset of 400 operating points from the simulation model. The experimental results show that the neural network prediction model optimized by the DBO algorithm achieves correlation coefficients above 95%. Finally, the NSGA-II algorithm is used for multi-objective optimization. The optimization results indicate that the proposed intelligent optimization method significantly improves the performance of the diesel engine under different altitude conditions, confirming the effectiveness and potential of artificial intelligence optimization algorithms in diesel engine optimization.
(This article belongs to the Section Industrial Sensors)
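The abstract's final step, NSGA-II multi-objective optimization over a trained surrogate, can be sketched as follows with pymoo. The toy objectives stand in for the neural network predictor, and the DBO-based tuning stage is omitted, so this is only an illustration of the optimization step, not the paper's pipeline.

```python
# Minimal sketch (not the authors' pipeline): NSGA-II over a surrogate engine model.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class EngineProblem(ElementwiseProblem):
    def __init__(self):
        # decision variables: e.g., injection timing and boost pressure (illustrative)
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([-10.0, 1.0]), xu=np.array([10.0, 3.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        # toy objectives standing in for predicted fuel consumption and NOx emissions
        fuel = (x[0] - 2.0) ** 2 + 0.5 * x[1]
        nox = (x[0] + 1.0) ** 2 + (3.0 - x[1]) ** 2
        out["F"] = [fuel, nox]

res = minimize(EngineProblem(), NSGA2(pop_size=40), ("n_gen", 50), seed=1, verbose=False)
print(res.F[:5])   # a sample of the resulting Pareto front
```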

14 pages, 3703 KiB  
Article
Artificial Neural Network-Based Structural Analysis of 3D-Printed Polyethylene Terephthalate Glycol Tensile Specimens
by Athanasios Manavis, Anastasios Tzotzis, Lazaros Firtikiadis and Panagiotis Kyratsis
Abstract
Materials are a mainstay of both industry and everyday life. The manufacturing and processing of materials is a very important sector, as it affects both the mechanical properties and the usage of the final products. In recent years, the increased use of 3D printing and, by extension, of its materials has created gaps in strength knowledge that require further scientific study. In this study, the influence of various printing parameters on 3D-printed specimens made of polyethylene terephthalate glycol (PETG) polymer was tested. More specifically, three printing parameters were selected—infill, speed, and type—with three different values each (50%, 70%, and 90%), (5 mm/s, 20 mm/s, and 35 mm/s) and (Grid, Rectilinear, and Wiggle). From the combinations of the three parameters and the three values, 27 different specimens were obtained, and thus 27 corresponding experiments were designed. The measurements were evaluated, and the process was modeled with the Artificial Neural Network (ANN) method, revealing a strong and robust prediction model for the tensile test, with the relative error being below 10%. Both infill density and infill pattern were identified as the most influential parameters, with the Wiggle type being the strongest pattern of all. Additionally, it was found that increasing the infill density increases the strength, whereas increasing the printing speed decreases it.
(This article belongs to the Section Advanced Manufacturing)
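A minimal sketch of the kind of model the abstract describes, an ANN fitted to the 3 x 3 x 3 factorial design of infill, speed, and pattern, is shown below. The strength values are random placeholders and the scikit-learn pipeline is an assumption, not the authors' implementation.

```python
# Minimal sketch (not the authors' model): an MLP regressor over the 27-run design.
import numpy as np
from itertools import product
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

infills = [50, 70, 90]                       # infill density (%)
speeds = [5, 20, 35]                         # printing speed (mm/s)
patterns = ["Grid", "Rectilinear", "Wiggle"]

# 3 x 3 x 3 full-factorial design -> 27 specimens, as in the abstract
X = np.array([[i, s, p] for i, s, p in product(infills, speeds, patterns)], dtype=object)
rng = np.random.default_rng(0)
y = rng.normal(40.0, 5.0, size=len(X))       # placeholder tensile strengths (MPa)

pre = ColumnTransformer([("num", StandardScaler(), [0, 1]),
                         ("cat", OneHotEncoder(), [2])])
model = Pipeline([("pre", pre),
                  ("ann", MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))])
model.fit(X, y)
print(model.predict(np.array([[70, 20, "Wiggle"]], dtype=object)))
```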

12 pages, 6468 KiB  
Article
Artificial Neural Networks for the Simulation and Modeling of the Adsorption of Fluoride Ions with Layered Double Hydroxides
by Julio Cesar Estrada-Moreno, Eréndira Rendón-Lara, María de la Luz Jiménez-Núñez and Jacob Josafat Salazar Rábago
Abstract
Adsorption is a complex process, since it is affected by multiple variables related to the physicochemical properties of the adsorbate, the adsorbent, and the interface; therefore, to understand the adsorption process in batch systems, empirical kinetic and isotherm models are commonly used. On the other hand, artificial neural networks (ANNs) have proven to be useful in solving a wide variety of complex problems in science and engineering due to their combination of computational efficiency and precision in the results; for this reason, in recent years, ANNs have begun to be used for describing adsorption processes. In this work, we present an ANN model of the adsorption of fluoride ions in water with layered double hydroxides (LDHs) and its comparison with empirical kinetic adsorption models. The LDH was synthesized and characterized using X-ray diffraction, FT-infrared spectroscopy, BET analysis, and the point of zero charge. Fluoride ion adsorption was evaluated under different experimental conditions, including contact time, initial pH, and initial fluoride ion concentration. A total of 262 experiments were conducted, and the resulting data were used for training and testing the ANN model. The results indicate that the ANN can accurately forecast the adsorption conditions with a determination coefficient R2 of 0.9918.
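As an illustration of the modeling approach described here, the sketch below fits a small ANN to placeholder batch-adsorption data with the three inputs named in the abstract (contact time, initial pH, initial concentration) and reports R². The data-generating function and network size are assumptions.

```python
# Minimal sketch (not the authors' model): an ANN mapping contact time, initial pH,
# and initial fluoride concentration to the adsorbed amount, on placeholder data
# standing in for the 262 batch experiments described in the abstract.
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 262
contact_time = rng.uniform(0, 180, n)         # min (illustrative range)
initial_ph = rng.uniform(4, 10, n)
initial_conc = rng.uniform(1, 20, n)          # mg/L
X = np.column_stack([contact_time, initial_ph, initial_conc])
# placeholder target: a smooth pseudo-Langmuir-like response plus noise
y = 10 * initial_conc / (5 + initial_conc) * (1 - np.exp(-contact_time / 30)) \
    + rng.normal(0, 0.2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0))
ann.fit(X_train, y_train)
print("R2:", r2_score(y_test, ann.predict(X_test)))
```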

19 pages, 4584 KiB  
Article
Model for Impacts of Urban Water Blue Visual Index and Flow Velocity on Human Brain State and Its Practical Application
by Yiming Zhang, Xuezhou Zhu and Qingbin Li
Buildings 2025, 15(3), 339; https://doi.org/10.3390/buildings15030339 - 23 Jan 2025
Abstract
This study develops a predictive model to assess the impacts of urban water blue visual index (BVI) and flow velocity on human brain states using EEG and HRV data in virtual reality simulations. By integrating Gaussian process regression (GPR) and artificial neural networks (ANN), the model accurately captures the relationships between BVI, flow velocities, and brain states, reflecting experimental observations with high precision. Applied across 31 provinces in China, the model effectively predicted regional brain state levels, aligning closely with the birthplace distribution of high-level talents, such as academicians and Changjiang scholars. These results highlight the model’s practical application in optimizing urban water features to enhance mental health, cognitive performance, and societal development.
(This article belongs to the Special Issue Research on Intelligent Geotechnical Engineering)
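A minimal sketch of combining GPR and an ANN on (BVI, flow velocity) inputs is given below. The data, the brain-state index, and the equal-weight blending rule are placeholders, since the abstract does not specify how the two models are integrated.

```python
# Minimal sketch (not the authors' model): a GPR and an ANN regressor fitted to the
# same (BVI, flow velocity) -> brain-state-index data and blended by simple averaging.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
bvi = rng.uniform(0.0, 1.0, 120)                 # blue visual index (fraction of view)
velocity = rng.uniform(0.0, 2.0, 120)            # flow velocity (m/s, illustrative)
X = np.column_stack([bvi, velocity])
brain_state = 0.6 * bvi + 0.3 * np.sin(velocity) + rng.normal(0, 0.05, 120)  # placeholder index

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True).fit(X, brain_state)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)).fit(X, brain_state)

X_new = np.array([[0.4, 0.8], [0.7, 1.5]])
blended = 0.5 * gpr.predict(X_new) + 0.5 * ann.predict(X_new)   # naive equal-weight blend
print(blended)
```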

16 pages, 10679 KiB  
Article
Evaluation of the Artificial Neural Networks—Dynamic Infrared Rain Rate near Real-Time (PDIR-Now) Satellite’s Ability to Monitor Annual Maximum Daily Precipitation in Mainland China
by Yanping Zhu, Gaosong Chang, Wenjiang Zhang, Jingyu Guo and Xiaodong Li
Water 2025, 17(3), 308; https://doi.org/10.3390/w17030308 - 23 Jan 2025
Abstract
As one of the countries most severely affected by extreme climate disasters, China has a pressing need to scientifically understand the characteristics of extreme precipitation. The artificial neural network near-real-time dynamic infrared rainfall rate satellite precipitation dataset (PDIR-Now) is a global, long-term resource with diverse spatial resolutions, rich temporal scales, and broad spatiotemporal coverage, providing an important data source for the study of extreme precipitation. However, its applicability and accuracy still need to be evaluated in specific applications. Based on observation data from 824 surface meteorological stations in China, this study used the quantitative statistical indicators correlation coefficient (R), relative bias (RB), root mean square error (RMSE), and relative root mean square error (RRMSE) to evaluate the annual maximum daily precipitation of PDIR-Now from 2000 to 2016, in order to explore the ability of PDIR-Now satellite precipitation products to monitor extreme precipitation in mainland China. The results show that, from the perspective of the long-term series, the annual maximum daily precipitation of PDIR-Now has a good ability to monitor extreme precipitation across the country, and R exceeds 0.6 in 65% of the years. The RMSE of different years is generally distributed between 40 and 60 mm, and in terms of temporal characteristics, the error of each year is relatively stable and does not fluctuate greatly between dry and wet years. From the perspective of spatial characteristics, the distribution of RMSE is strongly regional, with the RMSE in the Qinghai–Tibet Plateau and Northwest China basically in the range of 0~20 mm; the Yunnan–Guizhou Plateau, the Sichuan Basin, Northeast China, and the central part of the study area in the range of 20~50 mm; and the RMSE at a few stations on the southeast coast greater than 80 mm. The RRMSE of most sites is between 0 and 0.6, and that of a few sites is between 0.6 and 1.5. Generally, higher RRMSE values and larger errors are observed in the northwest and southeast coastal regions. Overall, PDIR-Now captures the regional characteristics of extreme precipitation in the study area, but it is underestimated in the wet season in humid and semi-humid regions and overestimated in the dry season in arid and semi-arid regions.
(This article belongs to the Section Hydrology)
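The four evaluation statistics named in this abstract can be computed as in the sketch below. The formulas follow common usage in satellite precipitation evaluation and may differ in detail from the paper's exact definitions, and the station values are illustrative.

```python
# Minimal sketch: R, RB, RMSE, and RRMSE for a pair of annual-maximum-daily-precipitation
# series (gauge vs. PDIR-Now), using placeholder values.
import numpy as np

def evaluate(gauge, satellite):
    gauge, satellite = np.asarray(gauge, float), np.asarray(satellite, float)
    r = np.corrcoef(gauge, satellite)[0, 1]                  # correlation coefficient
    rb = (satellite - gauge).sum() / gauge.sum()             # relative bias
    rmse = np.sqrt(np.mean((satellite - gauge) ** 2))        # root mean square error
    rrmse = rmse / gauge.mean()                              # relative RMSE
    return {"R": r, "RB": rb, "RMSE": rmse, "RRMSE": rrmse}

gauge = [62.0, 85.5, 47.2, 120.3, 74.8]      # mm, illustrative station values
pdir_now = [58.1, 90.2, 55.0, 101.7, 80.3]
print(evaluate(gauge, pdir_now))
```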

31 pages, 1892 KiB  
Article
Optimizing Controlled-Resonance Acoustic Metamaterials with Perforated Plexiglass Disks, Honeycomb Structures, and Embedded Metallic Masses
by Giuseppe Ciaburro, Gino Iannace and Virginia Puyana Romero
Fibers 2025, 13(2), 11; https://doi.org/10.3390/fib13020011 - 22 Jan 2025
Abstract
Acoustic metamaterials offer new opportunities for controlling sound waves through engineered material configurations at the sub-wavelength scale. In this research, we present the optimization of a resonance-controlled acoustic metamaterial based on a sandwich structure composed of perforated plexiglass disks, honeycomb structures, and added metal masses. The innovative approach consists of integrating perforated plexiglass disks interspersed with honeycomb structures, which act as multiple and complex Helmholtz resonators, and adding metal masses to introduce resonances at specific frequencies. The metamaterial’s acoustic properties were experimentally characterized using an impedance tube (Kundt tube), allowing the measurement of the Sound Absorption Coefficient (SAC) over a wide frequency range. The results demonstrate a substantial enhancement in sound absorption at the target frequencies, confirming the effectiveness of the introduced resonances. Numerical simulations using an Artificial Neural Network (ANN) model in the MATLAB environment were used to analyze the distribution of resonances and optimize the structural configuration. To effectively evaluate the acoustic properties of the metamaterial, various configurations were analyzed using perforated plexiglass disks combined with different layers of honeycombs arranged in a sandwich structure with a thickness ranging from 41 to 45 mm. A comparison of these configurations revealed a notable increase in the SAC (about 14%) when employing three layers of perforated plexiglass disks and adding masses to the first disk. This study highlights the potential of resonance-controlled metamaterials for advanced applications in noise control and acoustic engineering.
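Although the paper's ANN was built in MATLAB, the idea of mapping a metamaterial configuration and frequency to a sound absorption coefficient can be sketched in Python as follows. The configuration features and the synthetic SAC response are assumptions for illustration only.

```python
# Minimal sketch (not the paper's MATLAB model): an ANN that maps a metamaterial
# configuration (number of perforated disks, added mass, total thickness) and
# frequency to a sound absorption coefficient, trained on synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_disks = rng.integers(1, 4, 500)                 # 1 to 3 perforated plexiglass disks
added_mass = rng.uniform(0.0, 20.0, 500)          # g, illustrative
thickness = rng.uniform(41.0, 45.0, 500)          # mm, as in the abstract
freq = rng.uniform(200.0, 2000.0, 500)            # Hz
X = np.column_stack([n_disks, added_mass, thickness, freq])
# placeholder SAC: a resonance-like bump whose centre shifts with added mass
sac = np.clip(0.3 + 0.6 * np.exp(-((freq - (800 + 20 * added_mass)) / 150) ** 2), 0, 1)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
model.fit(X, sac)
print(model.predict([[3, 10.0, 43.0, 1000.0]]))   # predicted SAC for one configuration
```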
22 pages, 762 KiB  
Review
Clinical Applications of Artificial Intelligence (AI) in Human Cancer: Is It Time to Update the Diagnostic and Predictive Models in Managing Hepatocellular Carcinoma (HCC)?
by Mario Romeo, Marcello Dallio, Carmine Napolitano, Claudio Basile, Fiammetta Di Nardo, Paolo Vaia, Patrizia Iodice and Alessandro Federico
Diagnostics 2025, 15(3), 252; https://doi.org/10.3390/diagnostics15030252 - 22 Jan 2025
Abstract
In recent years, novel findings have progressively and promisingly supported the potential role of Artificial intelligence (AI) in transforming the management of various neoplasms, including hepatocellular carcinoma (HCC). HCC represents the most common primary liver cancer. Alarmingly, the HCC incidence is dramatically increasing worldwide due to the simultaneous “pandemic” spreading of metabolic dysfunction-associated steatotic liver disease (MASLD). MASLD currently constitutes the leading cause of chronic hepatic damage (steatosis and steatohepatitis), fibrosis, and liver cirrhosis, configuring a scenario where an HCC onset has been reported even in the early disease stage. On the other hand, HCC represents a serious plague, significantly burdening the outcomes of chronic hepatitis B (HBV) and hepatitis C (HCV) virus-infected patients. Despite the recent progress in the management of this cancer, the overall prognosis for advanced-stage HCC patients continues to be poor, suggesting the absolute need to develop personalized healthcare strategies further. In this “cold war”, machine learning techniques and neural networks are emerging as weapons, able to identify the patterns and biomarkers that would have normally escaped human observation. Using advanced algorithms, AI can analyze large volumes of clinical data and medical images (including routinely obtained ultrasound data) with an elevated accuracy, facilitating early diagnosis, improving the performance of predictive models, and supporting the multidisciplinary (oncologist, gastroenterologist, surgeon, radiologist) team in opting for the best “tailored” individual treatment. Additionally, AI can significantly contribute to enhancing the effectiveness of metabolomics–radiomics-based models, promoting the identification of specific HCC-pathogenetic molecules as new targets for realizing novel therapeutic regimens. In the era of precision medicine, integrating AI into routine clinical practice appears as a promising frontier, opening new avenues for liver cancer research and treatment.
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging: 2nd Edition)

17 pages, 4465 KiB  
Article
A Complete Pipeline to Extract Temperature from Thermal Images of Pigs
by Rodania Bekhit and Inonge Reimert
Sensors 2025, 25(3), 643; https://doi.org/10.3390/s25030643 - 22 Jan 2025
Abstract
Using deep learning or artificial intelligence (AI) in research with animals is a new interdisciplinary area of research. In this study, we have explored the potential of thermal imaging and AI in pig research. Thermal cameras play a vital role in obtaining and collecting a large amount of data, and AI has the capability to process and extract valuable information from these data. The amount of data collected using thermal imaging is huge, and automation techniques are therefore crucial to find a meaningful interpretation of the changes in temperature. In this paper, we present a complete pipeline to extract temperature automatically from a selected Region of Interest (ROI). The system consists of three stages: the first checks whether the ROI is completely visible so that the thermal temperature can be observed; the second uses an encoder–decoder convolutional neural network to segment the ROI if the condition in stage one is met; and in the last stage, the maximum temperature is extracted and saved to an external file. The segmentation model showed good performance, with a mean pixel class accuracy of 92.3% and a mean Intersection over Union of 87.1%. The temperature extracted by the model entirely matched the manually observed temperature. The system produced results reliable enough to be used independently, without human intervention, to determine the temperature in the selected ROI in pigs.
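The three-stage pipeline described above can be sketched as follows. The visibility check and the segmentation mask are placeholder functions (the paper uses an encoder-decoder CNN for segmentation), and the CSV output path is hypothetical.

```python
# Minimal sketch (not the authors' pipeline): check ROI visibility, segment the ROI,
# then extract and save the maximum temperature in the masked region.
import csv
import numpy as np

def roi_visible(thermal_frame):
    """Placeholder stage 1: accept the frame if enough warm pixels are present."""
    return (thermal_frame > 30.0).mean() > 0.05

def segment_roi(thermal_frame):
    """Placeholder stage 2: a boolean mask standing in for the encoder-decoder output."""
    return thermal_frame > np.percentile(thermal_frame, 95)

def process_frame(thermal_frame, out_path="temperatures.csv"):
    if not roi_visible(thermal_frame):                 # stage 1: visibility check
        return None
    mask = segment_roi(thermal_frame)                  # stage 2: ROI segmentation
    max_temp = float(thermal_frame[mask].max())        # stage 3: max temperature in ROI
    with open(out_path, "a", newline="") as f:
        csv.writer(f).writerow([max_temp])             # save to an external file
    return max_temp

frame = 25.0 + 12.0 * np.random.default_rng(0).random((240, 320))  # placeholder thermal image (°C)
print(process_frame(frame))
```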

35 pages, 2304 KiB  
Review
Modernizing Neuro-Oncology: The Impact of Imaging, Liquid Biopsies, and AI on Diagnosis and Treatment
by John Rafanan, Nabih Ghani, Sarah Kazemeini, Ahmed Nadeem-Tariq, Ryan Shih and Thomas A. Vida
Int. J. Mol. Sci. 2025, 26(3), 917; https://doi.org/10.3390/ijms26030917 - 22 Jan 2025
Abstract
Advances in neuro-oncology have transformed the diagnosis and management of brain tumors, which are among the most challenging malignancies due to their high mortality rates and complex neurological effects. Despite advancements in surgery and chemoradiotherapy, the prognosis for glioblastoma multiforme (GBM) and brain metastases remains poor, underscoring the need for innovative diagnostic strategies. This review highlights recent advancements in imaging techniques, liquid biopsies, and artificial intelligence (AI) applications addressing current diagnostic challenges. Advanced imaging techniques, including diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS), improve the differentiation of tumor progression from treatment-related changes. Additionally, novel positron emission tomography (PET) radiotracers, such as 18F-fluoropivalate, 18F-fluoroethyltyrosine, and 18F-fluciclovine, facilitate metabolic profiling of high-grade gliomas. Liquid biopsy, a minimally invasive technique, enables real-time monitoring of biomarkers such as circulating tumor DNA (ctDNA), extracellular vesicles (EVs), circulating tumor cells (CTCs), and tumor-educated platelets (TEPs), enhancing diagnostic precision. AI-driven algorithms, such as convolutional neural networks, integrate diagnostic tools to improve accuracy, reduce interobserver variability, and accelerate clinical decision-making. These innovations advance personalized neuro-oncological care, offering new opportunities to improve outcomes for patients with central nervous system tumors. We advocate for future research integrating these tools into clinical workflows, addressing accessibility challenges, and standardizing methodologies to ensure broad applicability in neuro-oncology.
(This article belongs to the Section Molecular Oncology)

15 pages, 2219 KiB  
Article
Unraveling Cyberbullying Dynamics: A Computational Framework Empowered by Artificial Intelligence
by Liliana Ibeth Barbosa-Santillán, Bertha Patricia Guzman-Velazquez, Ma. Teresa Orozco-Aguilera and Leticia Flores-Pulido
Information 2025, 16(2), 80; https://doi.org/10.3390/info16020080 - 22 Jan 2025
Abstract
Cyberbullying, which manifests in various forms, is a growing challenge on social media, mainly when it involves threats of violence through images, especially those featuring weapons. This study introduces a computational framework to identify such content using convolutional neural networks trained on weapon-related images. By integrating artificial intelligence techniques with image analysis, our model detects visual patterns associated with violent threats, helping create safer digital environments. The development of this work involved analyzing images depicting scenes with weapons carried by children or adolescents. Images were sourced from social media and spatial repositories. The data were processed through a 225-layer convolutional neural network, achieving an 86% accuracy rate in detecting weapons in images featuring children, adolescents, and young adults. The classifier method reached an accuracy of 17.86% with training over only 25 epochs and a recall of 14.2%. Weapon detection is a complex task due to the variability in object exposures and differences in weapon shapes, sizes, orientations, colors, and image capture methods. Segmentation issues and the presence of background objects or people further compound this complexity. Our study demonstrates that convolutional neural networks can effectively detect weapons in images, making them a valuable tool in addressing cyberbullying involving weapon imagery. Detecting such content contributes to creating safer digital environments for young people.
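As a schematic of the image-classification approach described here, the sketch below defines a small PyTorch CNN for a binary weapon-present label. It is far shallower than the 225-layer network in the abstract and uses random tensors in place of the collected images.

```python
# Minimal sketch (not the authors' network): a small CNN binary classifier for
# "weapon present / absent", with a random tensor standing in for images.
import torch
import torch.nn as nn

class WeaponCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = WeaponCNN()
images = torch.randn(4, 3, 224, 224)       # placeholder batch of images
labels = torch.tensor([0, 1, 1, 0])        # placeholder weapon-present labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print(float(loss))
```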

20 pages, 4780 KiB  
Article
Large-Space Laser Tracking Attitude Combination Measurement Using Backpropagation Algorithm Based on Neighborhood Search
by Ziyue Zhao, Zhi Xiong, Zhengnan Guo, Hao Zhang, Xiangyu Li, Zhongsheng Zhai and Weihu Zhou
Appl. Sci. 2025, 15(3), 1083; https://doi.org/10.3390/app15031083 - 22 Jan 2025
Abstract
Large-space, high-precision dynamic attitude measurement technology is urgently needed in large-equipment manufacturing fields such as aerospace, rail transportation, automobiles, and ships. In this paper, taking laser tracking equipment as the base station, a backpropagation algorithm based on neighborhood search is proposed and applied to the fusion of multi-source information for solving the dynamic attitude angle. This paper first establishes a mathematical model of laser-tracking dynamic attitude measurement based on IMU and CCD multi-sensor data, designs a 6-11-3 backpropagation network structure and algorithm flow, and realizes the prediction of the attitude angle through model training. Second, the neighborhood-search-based method determines the optimal training target value of the model, whose MSE is reduced by 34% compared to the IMU-based determination method. Finally, an experimental platform is set up with a precision rotary table as the motion carrier to verify the effectiveness of the proposed method. The experimental results show that, with the neighborhood-based backpropagation algorithm, the measurement results have a higher data update rate and a certain suppression effect on the error accumulation of the IMU. The absolute value of the system angle error remains less than 0.4° within 8 m and 0–50°, with an angle update rate of 100 Hz. The method proposed in this paper can be applied to the dynamic measurement of laser-tracking attitude angles and provides a new reference for angle measurement methods based on the fusion of multi-source information.
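A very loose sketch of the two ideas named in this abstract, a 6-11-3 backpropagation network and a neighborhood search over a training target, is given below. The fused-sensor data, the correction being searched over, and the objective are all placeholders, since the abstract does not specify them.

```python
# Minimal sketch (not the authors' implementation): a 6-11-3 backpropagation network
# mapping six fused IMU/CCD features to three attitude angles, plus a toy neighborhood
# search over a candidate correction applied to the training targets.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                      # fused IMU/CCD features (placeholder)
true_angles = X[:, :3] @ rng.normal(size=(3, 3))    # placeholder roll/pitch/yaw targets
X_tr, X_va, y_tr, y_va = train_test_split(X, true_angles, test_size=0.3, random_state=0)

def train_and_score(correction):
    net = MLPRegressor(hidden_layer_sizes=(11,), max_iter=3000, random_state=0)  # 6-11-3 topology
    net.fit(X_tr, y_tr + correction)                # candidate training target
    return mean_squared_error(y_va, net.predict(X_va))

# neighborhood search: start from zero correction and greedily explore nearby candidates
best_corr, best_mse = 0.0, train_and_score(0.0)
for step in (0.05, 0.01):
    for candidate in (best_corr - step, best_corr + step):
        mse = train_and_score(candidate)
        if mse < best_mse:
            best_corr, best_mse = candidate, mse
print(best_corr, best_mse)
```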

21 pages, 5352 KiB  
Article
A Trustworthy Framework for Skin Cancer Detection Using a CNN with a Modified Attention Mechanism
by Su Myat Thwin, Hyun-Seok Park and Soo Hyun Seo
Appl. Sci. 2025, 15(3), 1067; https://doi.org/10.3390/app15031067 - 22 Jan 2025
Abstract
The early and accurate detection of skin cancer can reduce mortality rates and improve patient outcomes, but requires advanced diagnostics. The integration of artificial intelligence (AI) into healthcare enables the precise and timely detection of skin cancer. However, significant challenges remain, including the difficulty of differentiating visually similar skin conditions and the limited availability of diverse, representative datasets. In this study, we proposed DCAN-Net, a novel deep-learning framework designed for the early detection of skin cancer. The model leverages an efficient backbone architecture optimized for capturing diverse skin patterns, utilizing carefully tuned parameters to enhance the discrimination capabilities and refine the extracted features using modified attention modules, thereby prioritizing relevant foreground information while minimizing background noise. Furthermore, the Grad-CAM explainable AI method was employed, highlighting the most salient features within dermatoscopic images. The fused optimal feature representations significantly enhanced the dermatoscopic image analysis. When evaluated on the HAM10000 dataset, DCAN-Net achieved a precision, recall, F1-score, and accuracy of 97.00%, 97.57%, 97.10%, and 97.57%, respectively. Moreover, the application of advanced data augmentation techniques mitigated data imbalance issues and reduced false-positive and false-negative rates across the original and augmented datasets. These findings demonstrate the potential of DCAN-Net for improving clinical outcomes and advancing AI-driven skin cancer diagnostics.
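To illustrate what a modified channel-attention module can look like inside a CNN backbone, here is a small PyTorch sketch. It is not DCAN-Net, and the backbone, reduction ratio, and 7-class head (matching HAM10000's seven lesion classes) are illustrative choices.

```python
# Minimal sketch (not DCAN-Net): a squeeze-and-excitation style channel-attention block
# attached to a small CNN backbone, re-weighting feature maps so that foreground
# lesion features are emphasised over background noise.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w[:, :, None, None]             # re-weight each feature map

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
attention = ChannelAttention(64)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 7))  # 7 HAM10000 classes

x = torch.randn(2, 3, 224, 224)                    # placeholder dermatoscopic images
logits = head(attention(backbone(x)))
print(logits.shape)                                # torch.Size([2, 7])
```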

23 pages, 4806 KiB  
Article
SAT-GATv2: A Dynamic Attention-Based Graph Neural Network for Solving Boolean Satisfiability Problem
by Wenjing Chang and Wenlong Liu
Electronics 2025, 14(3), 423; https://doi.org/10.3390/electronics14030423 - 22 Jan 2025
Abstract
We propose SAT-GATv2, a graph neural network (GNN)-based model designed to solve the Boolean satisfiability problem (SAT) through graph-based deep learning techniques. SAT-GATv2 transforms SAT formulas into graph structures, leveraging message-passing neural networks (MPNNs) to propagate local information and dynamic attention mechanisms (GATv2) to accurately capture inter-node dependencies and enhance node feature representations. Unlike traditional heuristic-driven SAT solvers, SAT-GATv2 adopts a data-driven approach, learning structural patterns directly from graph representations and providing a complementary framework to existing methods. Experimental results demonstrate that SAT-GATv2 achieves an accuracy improvement of 1.75–5.51% over NeuroSAT on challenging random 3-SAT(n) instances, highlighting its effectiveness in handling difficult problem distributions, and outperforms other GNN-based models on SR(n) datasets, showcasing its scalability and adaptability. Ablation studies validate the critical roles of MPNNs and GATv2 in improving prediction accuracy and scalability. While SAT-GATv2 does not yet surpass CDCL-based solvers in overall performance, it addresses their limitations in scalability and adaptability to complex instances, offering an efficient graph-based alternative for tackling larger and more complex SAT problems. This study establishes a foundation for integrating deep learning with combinatorial optimization, emphasizing its potential for applications in artificial intelligence and operations research.
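A minimal sketch of the core idea, applying GATv2 attention over a literal-clause graph built from a CNF formula, is shown below using PyTorch Geometric's GATv2Conv. The graph encoding, embedding sizes, and readout are illustrative, not the SAT-GATv2 architecture.

```python
# Minimal sketch (not SAT-GATv2): encode a tiny CNF formula as a literal-clause graph
# and apply two GATv2 attention layers, assuming PyTorch Geometric is installed.
import torch
from torch_geometric.nn import GATv2Conv

# (x1 or not x2) and (x2 or x3): literal nodes 0..5 (x1,~x1,x2,~x2,x3,~x3), clause nodes 6,7
edges = torch.tensor([[0, 3, 2, 4],        # literal -> clause membership
                      [6, 6, 7, 7]])
edge_index = torch.cat([edges, edges.flip(0)], dim=1)   # make edges bidirectional

x = torch.randn(8, 16)                                  # random initial node embeddings
conv1 = GATv2Conv(16, 32, heads=4, concat=False)        # dynamic attention layer
conv2 = GATv2Conv(32, 32, heads=4, concat=False)
h = conv2(torch.relu(conv1(x, edge_index)), edge_index)

sat_logit = h[6:].mean(dim=0) @ torch.randn(32)         # toy readout over clause nodes
print(float(sat_logit))
```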
