Search Results (16,300)

Search Parameters:
Keywords = feature selection

25 pages, 1209 KiB  
Article
Electricity Consumption Forecasting: An Approach Using Cooperative Ensemble Learning with SHapley Additive exPlanations
by Eduardo Luiz Alba, Gilson Adamczuk Oliveira, Matheus Henrique Dal Molin Ribeiro and Érick Oliveira Rodrigues
Forecasting 2024, 6(3), 839-863; https://doi.org/10.3390/forecast6030042 - 20 Sep 2024
Abstract
Electricity expense management presents significant challenges, as this resource is susceptible to various influencing factors. In universities, demand for electricity is growing rapidly with institutional expansion and has a significant environmental impact. In this study, the machine learning models long short-term memory (LSTM), random forest (RF), support vector regression (SVR), and extreme gradient boosting (XGBoost) were trained with historical consumption data from the Federal Institute of Paraná (IFPR) over the last seven years, together with climatic variables, to forecast electricity consumption 12 months ahead. Datasets from two campuses were adopted. To improve model performance, feature selection was performed using Shapley additive explanations (SHAP), and hyperparameter optimization was carried out using a genetic algorithm (GA) and particle swarm optimization (PSO). The results indicate that the proposed cooperative ensemble learning approach, named Weaker Separator Booster (WSB), exhibited the best performance on both datasets. Specifically, it achieved an sMAPE of 13.90% and an MAE of 1990.87 kWh for the IFPR–Palmas Campus, and an sMAPE of 18.72% and an MAE of 465.02 kWh for the Coronel Vivida Campus. The SHAP analysis revealed distinct feature importance patterns across the two IFPR campuses. A commonality that emerged was the strong influence of lagged time-series values and a minimal influence of climatic variables.
(This article belongs to the Section Power and Energy Forecasting)
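SHAP values are additive Shapley attributions of a model's output to its input features, which is the quantity the abstract's feature selection ranks by. A minimal stdlib sketch computing exact Shapley values by enumerating coalitions (viable only for a handful of features; the toy linear "forecaster", its weights, and the zero baseline are invented for illustration, not taken from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attribution of model(x) relative to a baseline input.

    Features absent from a coalition are replaced by their baseline value.
    Exponential in the number of features, so only viable for small inputs.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for coalition in combinations(others, size):
                with_i = [x[j] if j in coalition or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy "forecaster": a linear model, so the attribution of feature j
# reduces to w_j * (x_j - baseline_j).
w = [2.0, -1.0, 0.5]
model = lambda v: sum(wj * vj for wj, vj in zip(w, v))
phi = shapley_values(model, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
# Efficiency property: attributions sum to model(x) - model(baseline).
assert abs(sum(phi) - model([1.0, 3.0, 2.0])) < 1e-9
```

In practice the SHAP library's TreeExplainer computes the same quantity efficiently for tree ensembles such as RF and XGBoost, which is presumably what the paper relies on.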
19 pages, 4789 KiB  
Article
Graph Neural Networks for Mesh Generation and Adaptation in Structural and Fluid Mechanics
by Ugo Pelissier, Augustin Parret-Fréaud, Felipe Bordeu and Youssef Mesri
Mathematics 2024, 12(18), 2933; https://doi.org/10.3390/math12182933 - 20 Sep 2024
Abstract
The finite element discretization of computational physics problems frequently involves the manual generation of an initial mesh and the application of adaptive mesh refinement (AMR). This approach is employed to selectively enhance the accuracy of resolution in regions that encompass significant features throughout the simulation process. In this paper, we introduce Adaptnet, a Graph Neural Network (GNN) framework for learning mesh generation and adaptation. The model is composed of two GNNs: the first, Meshnet, learns mesh parameters commonly used in open-source mesh generators to generate an initial mesh from a Computer-Aided Design (CAD) file; the second, Graphnet, learns mesh-based simulations to predict the components of a Hessian-based metric for anisotropic mesh adaptation. Our approach is tested on structural (deforming plate, linear elasticity) and fluid mechanics (flow around cylinders, steady-state Stokes) problems. Our findings demonstrate the model’s ability to precisely predict the dynamics of the system and adapt the mesh as needed. The adaptability of the model enables learning resolution-independent mesh-based simulations during training, allowing it to scale effectively to more intricate state spaces during inference.
(This article belongs to the Special Issue Artificial Intelligence for Fluid Mechanics)
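Both Meshnet and Graphnet are message-passing networks over the mesh graph. The basic building block, a single mean-aggregation update, can be sketched in pure Python (the mixing weights and the scalar node feature below are illustrative, not Adaptnet's learned parameters):

```python
def message_passing_step(features, edges, w_self=0.5, w_neigh=0.5):
    """One round of mean-neighborhood message passing on an undirected graph.

    features: {node: [f1, f2, ...]}, edges: iterable of (u, v) pairs.
    Each node's new feature is a weighted mix of its own feature and the
    mean of its neighbors' features (a crude GraphSAGE-style update).
    """
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for n, f in features.items():
        if neighbors[n]:
            mean = [sum(features[m][k] for m in neighbors[n]) / len(neighbors[n])
                    for k in range(len(f))]
        else:
            mean = f
        updated[n] = [w_self * a + w_neigh * b for a, b in zip(f, mean)]
    return updated

# Tiny triangle mesh: three nodes, each carrying a scalar "error indicator".
feats = {0: [1.0], 1: [0.0], 2: [0.0]}
out = message_passing_step(feats, edges=[(0, 1), (1, 2), (0, 2)])
```

Stacking several such rounds lets information about local solution features propagate across the mesh, which is what allows Graphnet to predict a per-node adaptation metric.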

27 pages, 11936 KiB  
Article
Study on Hydraulic Fracture Propagation in Mixed Fine-Grained Sedimentary Rocks and Practice of Volumetric Fracturing Stimulation Techniques
by Hong Mao, Yinghao Shen, Yao Yuan, Kunyu Wu, Lin Xie, Jianhong Huang, Haoting Xing and Youyu Wan
Processes 2024, 12(9), 2030; https://doi.org/10.3390/pr12092030 - 20 Sep 2024
Abstract
Yingxiongling shale oil is considered a critical area for future crude oil production in the Qaidam Basin. However, the unique features of the Yingxiongling area, such as its extraordinary thickness, hybrid sedimentation, and extensive reformation, pose several challenges, including an unclear understanding of the main controlling factors of hydraulic fracture propagation, difficulty in selecting engineering sweet layers, and difficulty in optimizing the corresponding fracturing schemes, all of which restrict effective development. This study focuses on mixed fine-grained sedimentary rocks, employing a high-resolution integrated three-dimensional geological-geomechanical model to simulate fracture propagation. Combined with laboratory core experiments, a holistic investigation of the controlling factors was conducted, revealing that hydraulic fracture propagation in mixed fine-grained sedimentary rocks is mainly influenced by rock brittleness, natural fractures, stress, varying lithologies, and fracturing parameters. A comprehensive compressibility evaluation standard was established, considering brittleness, stress contrast, and natural fracture density, with weights of 0.3, 0.23, and 0.47, respectively. In light of the high brittleness, substantial interlayer stress differences, and locally developed natural microfractures of the Yingxiongling mixed fine-grained sedimentary rock reservoir, this study examined the influence of various construction parameters on the propagation of hydraulic fractures and optimized these parameters accordingly. Based on practical application in the field, a “three-stage” stimulation strategy was proposed: high-viscosity fluid up front to create the main fracture, low-viscosity fluid with sand-laden slugs to create volume fractures, and continuous sand-carrying high-viscosity fluid to maintain the conductivity of the fracture network. The resulting oil and gas seepage area corresponding to the stimulated reservoir volume (SRV) matched the actual well spacing of 500 m, achieving full utilization. The understanding of the controlling factors of fracture propagation, the compressibility evaluation standard, and the main process technology developed in this study effectively guide the optimization of stimulation programs for mixed fine-grained sedimentary rocks.
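The evaluation standard above is a weighted sum of three factors with the stated weights. A sketch under the assumption that each factor is pre-normalized to [0, 1] and that larger values favor fracability (the abstract gives the weights but not the normalization or signs):

```python
def compressibility_index(brittleness, stress_contrast, fracture_density,
                          weights=(0.30, 0.23, 0.47)):
    """Weighted compressibility (fracability) score on [0, 1].

    Inputs are assumed normalized to [0, 1]; weights follow the abstract
    (brittleness 0.30, stress contrast 0.23, natural fracture density 0.47).
    """
    factors = (brittleness, stress_contrast, fracture_density)
    if not all(0.0 <= f <= 1.0 for f in factors):
        raise ValueError("factors must be normalized to [0, 1]")
    return sum(w * f for w, f in zip(weights, factors))

# Hypothetical layer: brittle, moderate stress contrast, fractured.
score = compressibility_index(0.8, 0.5, 0.6)
```

The dominant weight on natural fracture density (0.47) means layers with developed microfractures rank as sweet layers even at moderate brittleness.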

14 pages, 10945 KiB  
Article
Protocol for Pre-Selection of Dwarf Garden Rose Varieties
by Tijana Narandžić, Ljiljana Nikolić, Branka Ljevnaić-Mašić, Biljana Božanić Tanjga, Olivera Ilić, Milana Čurčić and Mirjana Ljubojević
Horticulturae 2024, 10(9), 996; https://doi.org/10.3390/horticulturae10090996 - 20 Sep 2024
Abstract
Ornamental plant breeding enables the selection of cultivars with desired features from numerous genotypes; however, this process is time-consuming and resource-demanding. Aiming to establish a pre-selection protocol that can facilitate the selection of dwarf rose varieties, the connection between anatomical and histological characteristics and the vegetative growth of rose cultivars was examined. To assess the adaptive potential of the studied cultivars, intra-annual cambial dynamics were explored relative to the observed meteorological fluctuations during the growing season. The investigation included six garden rose cultivars from the ‘Reka’ and ‘Pixie’ collections, bred under semi-arid open-field conditions in Serbia. Plant height ranged from 20 to 68 cm, with differing growth habits and types. Vegetative growth was significantly correlated with the xylem/phloem ratio, the proportion of total vessel area relative to cross-sectional and xylem areas, vessel-related features, and porosity (correlation coefficients up to 0.78). Regeneration via cambial activity and the formation of false rings were observed in five of the six cultivars studied, with meteorological analysis suggesting that precipitation and temperature triggered cambial reactivation. This approach effectively targets key parameters in the selection of dwarf and climate-resilient rose cultivars, facilitating the development of reliable pre-selection criteria. Full article
(This article belongs to the Special Issue Cultivation and Breeding of Ornamental Plants)

21 pages, 4348 KiB  
Article
A Novel Ensemble Method of Divide-and-Conquer Markov Boundary Discovery for Causal Feature Selection
by Hao Li, Jianjun Zhan, Haosen Wang and Zipeng Zhao
Mathematics 2024, 12(18), 2927; https://doi.org/10.3390/math12182927 - 20 Sep 2024
Abstract
The discovery of Markov boundaries is highly effective at identifying features that are causally related to the target variable, providing strong interpretability and robustness. While there are numerous methods for discovering Markov boundaries in real-world applications, no single method is universally applicable to all datasets. Therefore, in order to balance precision and recall, we propose an ensemble framework of divide-and-conquer Markov boundary discovery algorithms based on a U-I selection strategy. We put three divide-and-conquer Markov boundary methods into the framework to obtain an ensemble algorithm, EDMB, which focuses on adjudicating controversial parent–child variables to further balance precision and recall. By combining multiple algorithms, the ensemble can leverage their respective strengths and analyze the cause-and-effect relationships of target variables more thoroughly from various perspectives. Furthermore, it enhances robustness and reduces dependence on any single algorithm. In the experiments, we select four advanced Markov boundary discovery algorithms as baselines and compare them on nine benchmark Bayesian networks and three real-world datasets. The results show that EDMB ranks first overall, which illustrates the superiority of the ensemble algorithm and the effectiveness of the adopted U-I selection strategy. The main contribution of this paper is an ensemble framework for divide-and-conquer Markov boundary discovery algorithms that balances precision and recall through the U-I selection strategy and adjudicates controversial parent–child variables to enhance performance and robustness. The advantage of the U-I selection strategy, and its difference from existing methods, is the ability to independently obtain the maximum precision and recall of multiple algorithms within the ensemble framework. By assessing controversial parent–child variables, it further balances precision and recall, leading to results that are closer to the true Markov boundary.
(This article belongs to the Special Issue Computational Methods and Machine Learning for Causal Inference)
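The abstract does not spell out the U-I selection strategy; one plausible reading (an assumption, not the paper's EDMB procedure) is a union/intersection scheme: variables selected by every algorithm are kept for precision, and the "controversial" ones in between are settled by vote:

```python
def ui_ensemble(candidate_boundaries, min_votes=2):
    """Combine Markov-boundary candidates from several algorithms.

    Variables found by every algorithm (the intersection) are accepted
    outright; 'controversial' variables -- in the union but not the
    intersection -- are accepted only if at least `min_votes` algorithms
    selected them. Illustrative reading of a union/intersection strategy.
    """
    sets = [set(b) for b in candidate_boundaries]
    agreed = set.intersection(*sets)          # high-precision core
    controversial = set.union(*sets) - agreed  # high-recall fringe
    accepted = {v for v in controversial
                if sum(v in s for s in sets) >= min_votes}
    return agreed | accepted

# Hypothetical outputs of three divide-and-conquer discovery algorithms.
boundaries = [{"A", "B", "C"}, {"A", "B", "D"}, {"A", "C", "D"}]
mb = ui_ensemble(boundaries)   # A is unanimous; B, C, D each get 2 votes
```

Raising `min_votes` moves the result toward the intersection (precision); lowering it moves it toward the union (recall).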

19 pages, 632 KiB  
Article
SMS Scam Detection Application Based on Optical Character Recognition for Image Data Using Unsupervised and Deep Semi-Supervised Learning
by Anjali Shinde, Essa Q. Shahra, Shadi Basurra, Faisal Saeed, Abdulrahman A. AlSewari and Waheb A. Jabbar
Sensors 2024, 24(18), 6084; https://doi.org/10.3390/s24186084 - 20 Sep 2024
Abstract
The growing problem of unsolicited text messages (smishing) and data irregularities necessitates stronger spam detection solutions. This paper explores the development of a sophisticated model designed to identify smishing messages by understanding the complex relationships among words, images, and context-specific factors, areas that remain underexplored in existing research. To address this, we merge a UCI spam dataset of regular text messages with real-world spam data, leveraging OCR technology for comprehensive analysis. The study employs a combination of traditional machine learning models, including K-means, Non-Negative Matrix Factorization, and Gaussian Mixture Models, along with feature extraction techniques such as TF-IDF and PCA. Additionally, deep learning models such as RNN-Flatten, LSTM, and Bi-LSTM are utilized. The selection of these models is driven by their complementary strengths in capturing both the linear and non-linear relationships inherent in smishing messages. Machine learning models are chosen for their efficiency in handling structured text data, while deep learning models are selected for their superior ability to capture sequential dependencies and contextual nuances. The performance of these models is rigorously evaluated using metrics such as accuracy, precision, recall, and F1 score, enabling a comparative analysis between the machine learning and deep learning approaches. Notably, K-means feature extraction with a vectorizer achieved 91.01% accuracy, and the RNN-Flatten model reached 94.13% accuracy, emerging as the top performer. The rationale behind highlighting these models is their potential to significantly improve smishing detection rates. For instance, the high accuracy of the RNN-Flatten model suggests its applicability in real-time spam detection systems, but its computational complexity might limit scalability in large-scale deployments. Similarly, while K-means with a vectorizer excels in accuracy, it may struggle with the dynamic and evolving nature of smishing attacks, necessitating continual retraining.
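The TF-IDF step of such a pipeline can be hand-rolled in a few lines of stdlib Python. This sketch uses the smoothed IDF that scikit-learn's TfidfVectorizer applies by default; the example messages are invented:

```python
from collections import Counter
from math import log

def tfidf(docs):
    """Return per-document {term: tf-idf} maps with smoothed idf.

    tf = raw count / doc length; idf = ln((1+N)/(1+df)) + 1, the smoothed
    form scikit-learn uses, so no term ever gets a zero or infinite weight.
    """
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))          # document frequency, not term count
    out = []
    for toks in tokenized:
        counts = Counter(toks)
        out.append({t: (c / len(toks)) * (log((1 + n) / (1 + df[t])) + 1)
                    for t, c in counts.items()})
    return out

msgs = ["win a free prize now", "meeting moved to noon", "claim your free prize"]
vecs = tfidf(msgs)
# 'free' appears in 2 of 3 docs, so it is down-weighted relative to 'win'.
```

The resulting sparse vectors are what K-means or a GMM would then cluster into spam-like and ham-like groups.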

20 pages, 7057 KiB  
Article
Weather Condition Clustering for Improvement of Photovoltaic Power Plant Generation Forecasting Accuracy
by Kristina I. Haljasmaa, Andrey M. Bramm, Pavel V. Matrenin and Stanislav A. Eroshenko
Algorithms 2024, 17(9), 419; https://doi.org/10.3390/a17090419 - 20 Sep 2024
Abstract
With growing interest in renewable energy sources under the strategies of various countries, the number of solar power plants keeps growing. Managing optimal power generation for solar power plants, however, has its own challenges. The first is the problem of work interruption and reduced power generation. Because the system must be tolerant of faults, short-term forecasting of solar power generation becomes crucial. Within the framework of this research, the applicability of different methods to short-term forecasting is examined. The main goal of the research is to show how to make the forecast more accurate and overcome the above-mentioned challenges using open-source data as features. A data clustering algorithm based on KMeans is proposed to train unique models for specific groups of data samples and thereby improve generation forecast accuracy. Based on practical calculations, machine learning models based on the Random Forest algorithm are selected, which have proven to have higher efficiency in predicting the generation of solar power plants. The proposed algorithm was successfully tested in practice, with an achieved accuracy near 90%.
(This article belongs to the Special Issue Algorithms for Time Series Forecasting and Classification)
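The cluster-then-forecast idea, one model per weather regime, can be sketched with a hand-rolled 1-D k-means and a per-cluster mean predictor standing in for the paper's multivariate KMeans and Random Forest models (all numbers are illustrative):

```python
def kmeans_1d(xs, k, iters=20):
    """Plain 1-D k-means; returns centroids and each point's cluster id."""
    cents = sorted(xs)[:: max(1, len(xs) // k)][:k]   # spread-out init
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(x - cents[c])) for x in xs]
        for c in range(k):
            members = [x for x, l in zip(xs, labels) if l == c]
            if members:
                cents[c] = sum(members) / len(members)
    return cents, labels

# Irradiance-like feature; target is PV output. One model per weather cluster.
feature = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
target = [10, 20, 15, 90, 100, 95]
cents, labels = kmeans_1d(feature, k=2)
cluster_model = {c: sum(t for t, l in zip(target, labels) if l == c)
                    / labels.count(c) for c in set(labels)}

def predict(x):
    """Route a new sample to its nearest cluster's model."""
    c = min(cluster_model, key=lambda c: abs(x - cents[c]))
    return cluster_model[c]
```

Replacing the per-cluster mean with a Random Forest trained on that cluster's samples gives the structure the abstract describes: each model only ever sees one weather regime.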

22 pages, 2716 KiB  
Article
Intelligent Identification Method of Low Voltage AC Series Arc Fault Based on Using Residual Model and Rime Optimization Algorithm
by Xiao He, Takahiro Kawaguchi and Seiji Hashimoto
Energies 2024, 17(18), 4675; https://doi.org/10.3390/en17184675 - 20 Sep 2024
Abstract
Aiming at the problem of accurate AC series arc fault detection, this paper proposes an intelligent low-voltage AC series arc fault detection model based on deep learning. An experimental platform was established according to the GB/T 31143-2014 standard. The system comprises a lower (slave) computer and an upper (master) computer, and facilitates both the acquisition of experimental data and the detection of arc faults during acquisition. Starting from a one-dimensional Convolutional Neural Network (CNN), a Residual model (Res) and the RIME optimization algorithm are introduced to optimize the CNN. Current signals collected as high-frequency current, low-frequency coupled current, and high-frequency coupled current are used to construct an arc fault feature set for training the detection model. The experimental results indicate that the RIME optimization algorithm delivers the best performance when optimizing a one-dimensional CNN detection model with an introduced Res, achieving a detection accuracy of 99.42% ± 0.13% and a kappa coefficient of 95.69% ± 0.96%. Among the collection methods, high-frequency coupled current signals are identified as the optimal choice for detecting low-voltage AC series arc faults. Regarding feature selection, random forest-based feature importance ranking proves to be the most effective method.
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 3rd Edition)
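The residual (Res) component adds a skip connection so each block learns only a correction to its input, y = x + F(x). A stdlib sketch with a hypothetical 3-tap filter standing in for the learned convolutional branch F:

```python
def conv1d_same(x, kernel):
    """1-D convolution with zero padding so the output length matches x."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def residual_block(x, kernel):
    """y = x + F(x): the branch learns a correction to the identity."""
    return [a + b for a, b in zip(x, conv1d_same(x, kernel))]

signal = [0.0, 1.0, 0.0, -1.0, 0.0]
# With an all-zero kernel the branch contributes nothing: the block is
# exactly the identity, which is what makes deep residual stacks trainable.
assert residual_block(signal, [0.0, 0.0, 0.0]) == signal
```

In the actual model each branch is a trained 1-D convolution over the current waveform, but the skip connection works the same way.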

26 pages, 9568 KiB  
Article
Technology for Obtaining Sintered Components with Tailored Electromagnetic Features by Selective Recycling of Printed Circuit Boards
by Romeo Cristian Ciobanu, Mihaela Aradoaei and Cristina Schreiner
Crystals 2024, 14(9), 820; https://doi.org/10.3390/cryst14090820 - 20 Sep 2024
Abstract
The paper presents a technological approach for obtaining sintered components with tailored electromagnetic features from electromagnetically active powders through the selective recycling of electronic waste, in particular scrap electronic components. Printed circuit board (PCB) scraps were submitted to a succession of grinding processes, followed by progressive magnetic and electrostatic separation, resulting in two final fractions: metallic particles and non-metallic particles including different metallic oxides. Three types of powders were analyzed, i.e., powder after fine grinding, after magnetic separation, and after electrostatic separation, which were further processed in a spark plasma sintering furnace to obtain solid disk samples. All samples contained several classes of oxides as well as residual metals, leading to specific thermal decomposition processes at different temperatures, depending on the nature of the oxides present in the studied materials. The chemical analysis of the powders, via X-ray fluorescence (XRF) spectrometry, confirmed the presence of a mixture of metal oxides and traces of metals (mainly Ag), with concentrations diminishing along the purification process. The most important analysis concerned the dielectric parameters, and it was concluded that the powders obtained by the proposed technology could efficiently substitute for scarce raw materials currently used as additives in composites, coatings, and paints, mainly due to their high permittivity (above 6 in all frequency domains) and high dielectric loss factor (above 0.2 in all frequency domains). We estimate that the technology described in this paper is sustainable according to the concept of the circular economy, as advanced recycling could reduce the embodied GHG emissions of information and communications technology (ICT) devices by a minimum of 15%.
(This article belongs to the Section Hybrid and Composite Crystalline Materials)

28 pages, 6881 KiB  
Article
Engagement Analysis Using Electroencephalography Signals in Games for Hand Rehabilitation with Dynamic and Random Difficulty Adjustments
by Raúl Daniel García-Ramón, Ericka Janet Rechy-Ramirez, Luz María Alonso-Valerdi and Antonio Marin-Hernandez
Appl. Sci. 2024, 14(18), 8464; https://doi.org/10.3390/app14188464 - 20 Sep 2024
Abstract
Background: Traditional physical rehabilitation involves participants performing repetitive body movements with the assistance of physiotherapists. Owing to the exercises’ monotonous nature and lack of reward, participants may become disinterested and cease their recovery. Games could be used as tools to engage participants in the rehabilitation process. Consequently, participants could perform rehabilitation exercises while playing the game, receiving rewards from the experience. Maintaining the players’ engagement requires regularly adjusting the game difficulty. The players’ engagement can be measured using questionnaires and biosignals (e.g., electroencephalography signals, EEG). This study aims to determine whether there is a significant difference in players’ engagement between two game modes with different game difficulty adjustments: non-tailored and tailored modes. Methods: We implemented two game modes controlled using hand movements. The features of the game rewards (position and size) were changed in the game scene; hence, the game difficulty could be modified. The non-tailored mode set the features of rewards in the game scene randomly. Conversely, the tailored mode set the features of rewards based on the participants’ range of motion using fuzzy logic; consequently, the game difficulty was adjusted dynamically. Additionally, engagement was computed for 53 healthy participants in both game modes using two EEG sensors: Bitalino Revolution and Unicorn. Specifically, the theta (θ) and alpha (α) bands from the frontal and parietal lobes were computed from the EEG data. A questionnaire was administered to participants after playing both game modes to collect their impressions of the following: their favorite game mode, the game mode that was the easiest to play, the least frustrating, the least boring, the most entertaining, and the one with the fastest game response time. Results: The non-tailored game mode yielded a mean engagement of 6.297 ± 11.274 using the Unicorn sensor and 3.616 ± 0.771 using the Bitalino sensor. The tailored game mode yielded a mean engagement of 4.408 ± 6.243 using the Unicorn sensor and 3.619 ± 0.551 using the Bitalino sensor. The non-tailored mode thus showed the highest mean engagement (6.297), obtained with the Unicorn sensor. Most participants selected the non-tailored game mode as their favorite and the most entertaining mode, irrespective of the EEG sensor. Conversely, most participants chose the tailored game mode as the easiest and least frustrating mode to play, irrespective of the EEG sensor. Conclusions: A Wilcoxon signed-rank test revealed a significant difference in engagement between game modes only when the EEG signal was collected via the Unicorn sensor (p-value = 0.04054). Fisher’s exact tests showed significant associations between the game modes (non-tailored, tailored) and the following player variables: ease of play using the Unicorn sensor (p-value = 0.009341) and frustration using the Unicorn sensor (p-value = 0.0466).
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare)
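Band powers such as θ (4–8 Hz) and α (8–13 Hz) are obtained from the EEG spectrum; the abstract does not give the exact engagement formula, so the θ/α ratio below is one common choice and an assumption. A stdlib sketch with a naive O(N²) DFT on a synthetic one-second window:

```python
from math import cos, sin, pi

def band_power(signal, fs, lo, hi):
    """Power in [lo, hi) Hz via a naive DFT (fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(signal[t] * cos(2 * pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * sin(2 * pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128                              # Hz, a typical consumer-EEG rate
t = [i / fs for i in range(fs)]       # one second of synthetic "EEG"
sig = [1.0 * cos(2 * pi * 6 * x) + 0.5 * cos(2 * pi * 10 * x) for x in t]
theta = band_power(sig, fs, 4, 8)     # dominated by the 6 Hz component
alpha = band_power(sig, fs, 8, 13)    # dominated by the 10 Hz component
engagement = theta / alpha            # assumed ratio-style index
```

Real pipelines use an FFT with windowing on successive epochs, but the band-summation step is the same.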

24 pages, 7788 KiB  
Article
Microstructure and Mechanical Properties of As-Built Ti-6Al-4V and Ti-6Al-7Nb Alloys Produced by Selective Laser Melting Technology
by Dorota Laskowska, Błażej Bałasz and Wojciech Zawadka
Materials 2024, 17(18), 4604; https://doi.org/10.3390/ma17184604 - 19 Sep 2024
Abstract
Additive manufacturing from metal powders using selective laser melting technology is gaining increasing interest in various industries. The purpose of this study was to determine the effect of changes in process parameter values on the relative density, microstructure and mechanical properties of Ti-6Al-4V and Ti-6Al-7Nb alloy samples. The experiment was conducted in response to a noticeable gap in the research on the manufacturability of the Ti-6Al-7Nb alloy in SLM technology. This topic is significant given the growing interest in this alloy for biomedical applications. The results of this study indicate that by properly selecting the volumetric energy density (VED), the relative density of the material produced and the surface roughness of the components can be effectively influenced. Microstructural analyses revealed similar patterns in both alloys manufactured under similar conditions, characterized by columnar β phase grains with needle-like α’ phases. Increasing the VED increased the tensile strength of the fabricated Ti-6Al-4V alloy components, while the opposite effect was observed for components fabricated from Ti-6Al-7Nb alloy. At the same time, Ti-6Al-7Nb alloy parts featured higher elongation values, which is desirable from the perspective of biomedical applications. Full article
(This article belongs to the Special Issue Recent Advances in Metal Powder Based Additive Manufacturing)
15 pages, 5216 KiB  
Article
Analyzing Traditional Building Materials: A Case Study on Repair Practices in Konuralp, Düzce-Türkiye
by Özlem Sallı Bideci and Büşra Sabuncu
Architecture 2024, 4(3), 763-777; https://doi.org/10.3390/architecture4030040 - 19 Sep 2024
Abstract
Wrong decisions and faulty practices during the repair and restoration of traditional buildings can cause further damage to the structures because of the materials used in the repair. The aim of this study is to establish a scientific basis for material selection in the repair of traditional buildings in the Konuralp region through chemical and petrographic analyses. Brick, mortar, plaster, and wood samples were taken from one registered building in the Konuralp neighborhood of Düzce Province that has survived to the present day, preserving its original structural features and reflecting the characteristics of traditional housing. Chemical and petrographic analyses were carried out on the samples. Based on these analyses, a scientific basis was created for selecting material properties in the repair and reuse of traditional buildings, and suggestions are made for the analysis of materials specific to traditional buildings in Konuralp.
21 pages, 5960 KiB  
Article
Effective SQL Injection Detection: A Fusion of Binary Olympiad Optimizer and Classification Algorithm
by Bahman Arasteh, Asgarali Bouyer, Seyed Salar Sefati and Razvan Craciunescu
Mathematics 2024, 12(18), 2917; https://doi.org/10.3390/math12182917 - 19 Sep 2024
Abstract
Since SQL injection allows attackers to interact with the database of an application, it is regarded as a significant security problem. SQL injection attacks can be identified by applying machine learning algorithms. Problem: In the training stage of machine learning methods, effective features are used to develop an optimal classifier with high accuracy. Identifying the features with the highest efficacy is an NP-complete combinatorial optimization problem; feature selection is the procedure of finding the smallest subset of features that remains most effective on the dataset. The aim of this paper is to optimize the accuracy, precision, and sensitivity of SQL injection attack detection. Method: In the first step, a dedicated training dataset comprising 13 features was developed. In the second step, a binary variant of the Olympiad optimization algorithm was developed to select the best features of the dataset. Various machine learning algorithms were then used to create the optimal attack detector. Results: In the experiments carried out, the suggested SQL injection detector using an artificial neural network and the feature selector achieved 99.35% accuracy, 100% precision, and 100% sensitivity. By selecting about 30% of the features, the proposed method enhanced the efficiency of SQL injection detectors. Full article
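The wrapper-style feature selection described above can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the synthetic 13-feature dataset, the 1-NN fitness function, and the single-bit-flip hill climb (the actual binary Olympiad optimizer and the real SQL-query features are not specified in the abstract). The sketch only shows the general loop of scoring binary feature masks with a wrapped classifier:

```python
import random

random.seed(0)

# Toy stand-in for a labeled SQL-query feature dataset: 13 features per sample,
# where only features 0, 3, and 7 actually determine the label (attack vs. benign).
def make_sample():
    x = [random.random() for _ in range(13)]
    label = 1 if x[0] + x[3] + x[7] > 1.5 else 0
    return x, label

data = [make_sample() for _ in range(200)]
train, test = data[:150], data[150:]

def knn_accuracy(mask):
    """Fitness of a binary feature mask: 1-NN accuracy on the held-out split
    using only the features whose mask bit is 1."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in idx)
    correct = 0
    for x, y in test:
        nearest = min(train, key=lambda s: dist(s[0], x))
        correct += nearest[1] == y
    return correct / len(test)

# Simple binary hill climb as a stand-in for the binary optimizer: flip one
# mask bit at a time; accepting equal scores lets the mask shrink without
# hurting accuracy, which mimics "fewer features, same efficacy".
mask = [1] * 13
best = knn_accuracy(mask)
for _ in range(100):
    cand = mask[:]
    cand[random.randrange(13)] ^= 1  # flip one feature bit
    score = knn_accuracy(cand)
    if score >= best:
        mask, best = cand, score

print(sum(mask), "features kept, accuracy %.2f" % best)
```

In a real pipeline the fitness would be cross-validated on the training data only, and the wrapped model would be the artificial neural network mentioned in the abstract rather than 1-NN.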
18 pages, 2206 KiB  
Article
Simultaneous Instance and Attribute Selection for Noise Filtering
by Yenny Villuendas-Rey, Claudia C. Tusell-Rey and Oscar Camacho-Nieto
Appl. Sci. 2024, 14(18), 8459; https://doi.org/10.3390/app14188459 - 19 Sep 2024
Abstract
Noise is inherent to most real collected data. Removing or reducing noise helps classification algorithms focus on relevant patterns, preventing them from being affected by irrelevant or incorrect information. This can result in more accurate and reliable models, improving their ability to generalize and make accurate predictions on new data. For example, among the main disadvantages of the nearest neighbor classifier are its noise sensitivity and its high computational cost (for classification and storage). Noise filtering is thus essential to ensure data quality and the effectiveness of supervised classification models. The simultaneous selection of attributes and instances for supervised classifiers was introduced in the last decade. However, existing solutions present several drawbacks: some are stochastic or do not handle noisy domains, the neighborhood selection of some algorithms allows very dissimilar objects to be considered neighbors, and some methods are designed for a specific classifier and do not generalize. This article introduces an instance and attribute selection model that detects and eliminates existing noise while reducing the feature space. The proposal is deterministic and does not presuppose any particular supervised classifier. The experiments establish the viability of the proposal and its effectiveness in eliminating noise. Full article
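The paper's own simultaneous instance-and-attribute selection algorithm is not reproduced here, but the general idea of deterministic neighborhood-based noise filtering can be sketched with classic Wilson editing (ENN), which removes an instance when the majority of its k nearest neighbours disagree with its label (the tiny 2-D dataset below is purely illustrative):

```python
from collections import Counter

def enn_filter(data, k=3):
    """Wilson editing: data is a list of (features, label) pairs.
    Returns the data with suspected label noise removed."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    kept = []
    for i, (x, y) in enumerate(data):
        others = [d for j, d in enumerate(data) if j != i]
        neighbours = sorted(others, key=lambda s: dist(s[0], x))[:k]
        majority = Counter(lab for _, lab in neighbours).most_common(1)[0][0]
        if majority == y:          # keep only instances their neighbourhood agrees with
            kept.append((x, y))
    return kept

# Two clear clusters plus one mislabelled point sitting inside the wrong cluster.
data = [([0.0, 0.0], 0), ([0.1, 0.0], 0), ([0.0, 0.1], 0),
        ([1.0, 1.0], 1), ([0.9, 1.0], 1), ([1.0, 0.9], 1),
        ([0.05, 0.05], 1)]        # noisy label: class-1 point inside the class-0 cluster
clean = enn_filter(data, k=3)
print(len(clean))  # → 6 (only the mislabelled point is removed)
```

The filter is deterministic, as the abstract requires of the proposed model, although the paper's method additionally reduces the feature space, which ENN does not.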
17 pages, 6078 KiB  
Article
Matchability and Uncertainty-Aware Iterative Disparity Refinement for Stereo Matching
by Junwei Wang, Wei Zhou, Yujun Tang and Hanming Guo
Appl. Sci. 2024, 14(18), 8457; https://doi.org/10.3390/app14188457 - 19 Sep 2024
Abstract
After significant progress in stereo matching, the pursuit of robust and efficient disparity refinement methods for ill-posed regions remains challenging. To further improve the performance of disparity refinement, in this paper we propose a matchability and uncertainty-aware iterative disparity refinement neural network. First, a new matchability and uncertainty decoder (MUD) is proposed to decode the matchability mask and disparity uncertainties, which are used to evaluate the reliability of feature matching and of the estimated disparity, thereby reducing susceptibility to mismatched pixels. Then, based on the proposed MUD, we present two modules: the uncertainty-preferred disparity field initialization (UFI) module and the masked hidden state global aggregation (MGA) module. In the UFI, a multi-disparity window scan-and-select method provides a further initialized disparity field and a more accurate initial disparity. In the MGA, the adaptively masked disparity field hidden state is globally aggregated to extend the propagation range per iteration, improving refinement efficiency. Finally, experimental results on public datasets show that the proposed model achieves reductions of up to 17.9% in average disparity error and 16.9% in occluded-outlier proportion, demonstrating more practical handling of ill-posed regions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
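The UFI module above is part of a learned network, but the underlying "scan candidate disparities and select the best match" idea can be illustrated with a deliberately tiny classical winner-take-all sketch over a 1-D scanline. All arrays and the SAD cost below are hypothetical stand-ins; the paper's actual module operates on learned feature maps, not raw intensities:

```python
# Synthetic stereo pair on one scanline: the right view is the left view
# shifted by a true disparity of 2 pixels (zero-padded at the border).
left = [3, 7, 1, 9, 4, 8, 2, 6, 5, 0]
true_d = 2
right = left[true_d:] + [0] * true_d

def sad(x, d, radius=1):
    """Sum-of-absolute-differences matching cost for pixel x at disparity d,
    over a small window, with indices clamped at the image border."""
    cost = 0
    for w in range(-radius, radius + 1):
        lx = min(max(x + w, 0), len(left) - 1)
        rx = min(max(x - d + w, 0), len(right) - 1)
        cost += abs(left[lx] - right[rx])
    return cost

# Winner-take-all initialization: for each pixel, scan a disparity window
# and select the candidate with the lowest matching cost.
disp = []
for x in range(len(left)):
    candidates = [d for d in range(4) if x - d >= 0]
    disp.append(min(candidates, key=lambda d: sad(x, d)))

print(disp)  # interior pixels recover the true disparity of 2
```

Border pixels remain unreliable here, which is exactly the kind of ill-posed region the uncertainty estimates in the proposed network are meant to flag.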