Topic Editors

Prof. Dr. Jaroslaw Krzywanski
Department of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 13/15 Armii Krajowej Av., 42-200 Czestochowa, Poland
Dr. Yunfei Gao
Shanghai Engineering Research Center of Coal Gasification, East China University of Science and Technology, Shanghai 200237, China
Dr. Marcin Sosnowski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Karolina Grabowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Dorian Skrobek
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 13/15 Armii Krajowej Av., 42-200 Czestochowa, Poland
Dr. Ghulam Moeen Uddin
Department of Mechanical Engineering, University of Engineering & Technology, Lahore, Punjab 54890, Pakistan
Dr. Anna Kulakowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Anna Zylka
Division of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 42-200 Czestochowa, Poland
Dr. Bachil El Fil
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Artificial Intelligence and Computational Methods: Modeling, Simulations and Optimization of Complex Systems

Abstract submission deadline
closed (30 September 2022)
Manuscript submission deadline
closed (20 October 2023)
Viewed by
186,833

Topic Information

Dear Colleagues,

Due to the increasing computational capability of current data processing systems, new opportunities are emerging in the modeling, simulation, and optimization of complex systems and devices. Methods that were previously difficult to apply, highly demanding, and time-consuming may now be considered when developing complete and sophisticated models in many areas of science and technology. The combination of computational methods and AI algorithms makes it possible to conduct multi-threaded analyses to solve advanced and interdisciplinary problems. This article collection aims to bring together research on advances in modeling, simulation, and optimization of complex systems. Original research, as well as review articles and short communications, with a particular focus on (but not limited to) artificial intelligence and other computational methods, are welcome.

Prof. Dr. Jaroslaw Krzywanski
Dr. Yunfei Gao
Dr. Marcin Sosnowski
Dr. Karolina Grabowska
Dr. Dorian Skrobek
Dr. Ghulam Moeen Uddin
Dr. Anna Kulakowska
Dr. Anna Zylka
Dr. Bachil El Fil
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • artificial neural networks
  • deep learning
  • genetic and evolutionary algorithms
  • artificial immune systems
  • fuzzy logic
  • expert systems
  • bio-inspired methods
  • CFD
  • modeling
  • simulation
  • optimization
  • complex systems

Participating Journals

Journal Name                              | Abbreviation | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Entropy                                   | entropy      | 2.1           | 4.9       | 1999          | 22.4 days               | CHF 2600
Algorithms                                | algorithms   | 1.8           | 4.1       | 2008          | 15 days                 | CHF 1600
Computation                               | computation  | 1.9           | 3.5       | 2013          | 19.7 days               | CHF 1800
Machine Learning and Knowledge Extraction | make         | 4.0           | 6.3       | 2019          | 27.1 days               | CHF 1800
Energies                                  | energies     | 3.0           | 6.2       | 2008          | 17.5 days               | CHF 2600
Materials                                 | materials    | 3.1           | 5.8       | 2008          | 15.5 days               | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (75 papers)

29 pages, 2155 KiB  
Review
Advanced Computational Methods for Modeling, Prediction and Optimization—A Review
by Jaroslaw Krzywanski, Marcin Sosnowski, Karolina Grabowska, Anna Zylka, Lukasz Lasek and Agnieszka Kijo-Kleczkowska
Materials 2024, 17(14), 3521; https://doi.org/10.3390/ma17143521 - 16 Jul 2024
Cited by 1 | Viewed by 2912
Abstract
This paper provides a comprehensive review of recent advancements in computational methods for the modeling, simulation, and optimization of complex systems in materials engineering, mechanical engineering, and energy systems. We identified key trends and highlighted the integration of artificial intelligence (AI) with traditional computational methods. Some of the cited works were previously published within the topic "Computational Methods: Modeling, Simulations, and Optimization of Complex Systems"; thus, this article compiles the latest reports from this field. The work presents various contemporary applications of advanced computational algorithms, including AI methods. It also introduces proposals for novel strategies in materials production and for optimization methods within the energy systems domain, where optimizing the properties of the materials used is essential. Our findings demonstrate significant improvements in accuracy and efficiency, offering valuable insights for researchers and practitioners. This review contributes to the field by synthesizing state-of-the-art developments and suggesting directions for future research, underscoring the critical role of these methods in advancing engineering and technological solutions. Full article

1 pages, 124 KiB  
Correction
Correction: Zhang et al. Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace. Materials 2023, 16, 1164
by Xi Zhang, Guiyun Zhang, Dong Zhang and Liping Zhang
Materials 2024, 17(6), 1233; https://doi.org/10.3390/ma17061233 - 7 Mar 2024
Viewed by 785
Abstract
In consideration of the contributions to this work, Feng Qian unequivocally requests the removal of his name from the author list of this publication [...] Full article
33 pages, 3390 KiB  
Review
Distributed Learning in the IoT–Edge–Cloud Continuum
by Audris Arzovs, Janis Judvaitis, Krisjanis Nesenbergs and Leo Selavo
Mach. Learn. Knowl. Extr. 2024, 6(1), 283-315; https://doi.org/10.3390/make6010015 - 1 Feb 2024
Cited by 2 | Viewed by 2743
Abstract
The goal of the IoT–Edge–Cloud Continuum approach is to distribute computation and data loads across multiple types of devices taking advantage of the different strengths of each, such as proximity to the data source, data access, or computing power, while mitigating potential weaknesses. Most current machine learning operations are currently concentrated on remote high-performance computing devices, such as the cloud, which leads to challenges related to latency, privacy, and other inefficiencies. Distributed learning approaches can address these issues by enabling the distribution of machine learning operations throughout the IoT–Edge–Cloud Continuum by incorporating Edge and even IoT layers into machine learning operations more directly. Approaches like transfer learning could help to transfer the knowledge from more performant IoT–Edge–Cloud Continuum layers to more resource-constrained devices, e.g., IoT. The implementation of these methods in machine learning operations, including the related data handling security and privacy approaches, is challenging and actively being researched. In this article the distributed learning and transfer learning domains are researched, focusing on security, robustness, and privacy aspects, and their potential usage in the IoT–Edge–Cloud Continuum, including research on tools to use for implementing these methods. To achieve this, we have reviewed 145 sources and described the relevant methods as well as their relevant attack vectors and provided suggestions on mitigation. Full article

31 pages, 626 KiB  
Review
Economic Dispatch Optimization Strategies and Problem Formulation: A Comprehensive Review
by Fatemeh Marzbani and Akmal Abdelfatah
Energies 2024, 17(3), 550; https://doi.org/10.3390/en17030550 - 23 Jan 2024
Cited by 8 | Viewed by 3816
Abstract
Economic Dispatch Problems (EDP) refer to the process of determining the power output of generation units such that the electricity demand of the system is satisfied at a minimum cost while technical and operational constraints of the system are satisfied. This procedure is vital in the efficient energy management of electricity networks since it can ensure the reliable and efficient operation of power systems. As power systems transition from conventional to modern ones, new components and constraints are introduced to power systems, making the EDP increasingly complex. This highlights the importance of developing advanced optimization techniques that can efficiently handle these new complexities to ensure optimal operation and cost-effectiveness of power systems. This review paper provides a comprehensive exploration of the EDP, encompassing its mathematical formulation and the examination of commonly used problem formulation techniques, including single and multi-objective optimization methods. It also explores the progression of paradigms in economic dispatch, tracing the journey from traditional methods to contemporary strategies in power system management. The paper categorizes the commonly utilized techniques for solving EDP into four groups: conventional mathematical approaches, uncertainty modelling methods, artificial intelligence-driven techniques, and hybrid algorithms. It identifies critical research gaps, a predominant focus on single-case studies that limit the generalizability of findings, and the challenge of comparing research due to arbitrary system choices and formulation variations. The present paper calls for the implementation of standardized evaluation criteria and the inclusion of a diverse range of case studies to enhance the practicality of optimization techniques in the field. Full article
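To make the problem the review surveys concrete: in the simplest convex EDP, each generator has a quadratic cost and the only constraint is that outputs sum to the demand; the optimum is reached when all units run at the same incremental cost. The sketch below is not taken from the paper; the cost coefficients and demand are invented for illustration, and generator limits are omitted.

```python
# Minimal economic dispatch sketch: quadratic costs C_i(P) = a_i + b_i*P + c_i*P^2
# and a single balance constraint sum(P_i) = demand (no generator limits).
# At the optimum every unit runs at the same incremental cost lambda:
#   b_i + 2*c_i*P_i = lam  =>  P_i = (lam - b_i) / (2*c_i)

def dispatch(units, demand):
    """units: list of (a, b, c) cost coefficients; returns (lam, [P_i])."""
    inv = [1.0 / (2.0 * c) for _, _, c in units]
    lam = (demand + sum(b * k for (_, b, _), k in zip(units, inv))) / sum(inv)
    return lam, [(lam - b) * k for (_, b, _), k in zip(units, inv)]

# Two hypothetical units and a 100 MW demand (coefficients are illustrative only).
lam, p = dispatch([(10.0, 2.0, 0.05), (12.0, 3.0, 0.025)], 100.0)
print(lam, p)  # lambda = 6.0, P = [40.0, 60.0]
```

Real formulations add generator limits, losses, and the uncertainty and multi-objective terms discussed in the review, which is why closed forms give way to the mathematical, AI-driven, and hybrid solvers the paper categorizes.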

19 pages, 8547 KiB  
Article
Stepwise Identification Method of Thermal Load for Box Structure Based on Deep Learning
by Hongze Du, Qi Xu, Lizhe Jiang, Yufeng Bu, Wenbo Li and Jun Yan
Materials 2024, 17(2), 357; https://doi.org/10.3390/ma17020357 - 10 Jan 2024
Cited by 2 | Viewed by 809
Abstract
Accurate and rapid thermal load identification based on limited measurement points is crucial for spacecraft on-orbit monitoring. This study proposes a stepwise identification method based on deep learning for identifying structural thermal loads that efficiently map the local responses and overall thermal load of a box structure. To determine the location and magnitude of the thermal load accurately, the proposed method segments a structure into several subregions and applies a cascade of deep learning models to gradually reduce the solution domain. The generalization ability of the model is significantly enhanced by the inclusion of boundary conditions in the deep learning models. In this study, a large simulated dataset was generated by varying the load application position and intensity for each sample. The input variables encompass a small set of structural displacements, while the outputs include parameters related to the thermal load, such as the position and magnitude of the load. Ablation experiments are conducted to validate the effectiveness of this approach. The results show that this method reduces the identification error of the thermal load parameters by more than 45% compared with a single deep learning network. The proposed method holds promise for optimizing the design and analysis of spacecraft structures, contributing to improved performance and reliability in future space missions. Full article

13 pages, 6702 KiB  
Communication
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
by Yi Wang, Yating Xu, Tianjian Li, Tao Zhang and Jian Zou
Algorithms 2023, 16(12), 574; https://doi.org/10.3390/a16120574 - 18 Dec 2023
Cited by 1 | Viewed by 1641
Abstract
Image deblurring based on sparse regularization has garnered significant attention, but there are still certain limitations that need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing the state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring. Full article
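For readers unfamiliar with the baseline the paper improves on, below is a minimal sketch of plain ISTA for 1D deblurring with a convex L1 penalty; it is not the paper's CNC/PnP method (which replaces the soft-threshold proximal step with a learned denoiser and a non-convex penalty). The blur operator, signal, and parameters are invented for illustration.

```python
# Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1, where A is a symmetric
# zero-padded 3-tap moving-average blur (so A^T = A and ||A||_2 <= 1, making a
# unit step size safe and the objective monotonically non-increasing).

def blur(x):  # apply A: zero-padded 3-tap moving average
    n = len(x)
    return [(x[i - 1] if i > 0 else 0.0) / 3.0 + x[i] / 3.0 +
            (x[i + 1] if i < n - 1 else 0.0) / 3.0 for i in range(n)]

def soft(v, t):  # proximal operator of t*||.||_1 (soft-thresholding)
    return [max(abs(a) - t, 0.0) * (1.0 if a >= 0 else -1.0) for a in v]

def objective(x, y, lam):
    r = [a - b for a, b in zip(blur(x), y)]
    return 0.5 * sum(a * a for a in r) + lam * sum(abs(a) for a in x)

# Hypothetical sparse ground truth and its blurred observation.
truth = [0.0] * 20
truth[5], truth[12] = 1.0, -0.7
y = blur(truth)

lam, x = 0.01, list(y)
f0 = objective(x, y, lam)
for _ in range(200):
    grad = blur([a - b for a, b in zip(blur(x), y)])  # A^T (A x - y), A symmetric
    x = soft([a - g for a, g in zip(x, grad)], lam)
f1 = objective(x, y, lam)  # decreases relative to f0
```

The PnP idea discussed in the abstract amounts to swapping `soft` for a pretrained denoiser; the CNC penalty replaces `lam*||x||_1` while keeping the overall model convex.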

19 pages, 8847 KiB  
Review
Review of Approaches to Minimise the Cost of Simulation-Based Optimisation for Liquid Composite Moulding Processes
by Boon Xian Chai, Boris Eisenbart, Mostafa Nikzad, Bronwyn Fox, Yuqi Wang, Kyaw Hlaing Bwar and Kaiyu Zhang
Materials 2023, 16(24), 7580; https://doi.org/10.3390/ma16247580 - 9 Dec 2023
Cited by 16 | Viewed by 1633
Abstract
The utilisation of numerical process simulation has greatly facilitated the challenging task of liquid composite moulding (LCM) process optimisation, providing ease of solution evaluation at a significantly reduced cost compared to complete reliance on physical prototyping. However, due to the process complexity, such process simulation is still considerably expensive at present. In this paper, cost-saving approaches to minimising the computational cost of simulation-based optimisation for LCM processes are compiled and discussed. Their specific applicability, efficacy, and suitability for various optimisation/moulding scenarios are extensively explored in detail. The comprehensive analysation and assimilation of their operation alongside applicability for the problem domain of interest are accomplished in this paper to further complement and contribute to future simulation-based optimisation capabilities for composite moulding processes. The importance of balancing the cost-accuracy trade-off is also repeatedly emphasised, allowing for substantial cost reductions while ensuring a desirable level of optimization reliability. Full article

23 pages, 2752 KiB  
Article
A Stochastic Load Forecasting Approach to Prevent Transformer Failures and Power Quality Issues Amid the Evolving Electrical Demands Facing Utilities
by John O’Donnell and Wencong Su
Energies 2023, 16(21), 7251; https://doi.org/10.3390/en16217251 - 25 Oct 2023
Cited by 1 | Viewed by 1172
Abstract
New technologies, such as electric vehicles, rooftop solar, and behind-the-meter storage, will lead to increased variation in electrical load, and the location and time of the penetration of these technologies are uncertain. Power quality, reliability, and protection issues can result if electric utilities do not consider the probability of load scenarios that have not yet occurred. The authors' approach to addressing these concerns started with collecting the electrical load data for an expansive and diverse set of distribution transformers. This provided approximately two-and-a-half years of data that were used to develop new methods that will enable engineers to address emerging issues. The efficacy of the methods was then assessed with a real-world test dataset that was not used in the development of the new methods. The result is an approach that efficiently generates stochastic electrical load forecasts for elements of distribution circuits. Methods are also described that use those forecasts for engineering analyses that predict the likelihood of distribution transformer failures and power quality events. All of the transformers identified as most likely to fail either did fail or revealed a data-correction opportunity. The accuracy of the power quality results was 92% while allowing for a balance between measures of efficiency and customer satisfaction. Full article

24 pages, 1788 KiB  
Article
Multi-Objective Optimization of Thin-Walled Composite Axisymmetric Structures Using Neural Surrogate Models and Genetic Algorithms
by Bartosz Miller and Leonard Ziemiański
Materials 2023, 16(20), 6794; https://doi.org/10.3390/ma16206794 - 20 Oct 2023
Cited by 3 | Viewed by 1342
Abstract
Composite shells find diverse applications across industries due to their high strength-to-weight ratio and tailored properties. Optimizing parameters such as matrix-reinforcement ratio and orientation of the reinforcement is crucial for achieving the desired performance metrics. Stochastic optimization, specifically genetic algorithms, offer solutions, yet their computational intensity hinders widespread use. Surrogate models, employing neural networks, emerge as efficient alternatives by approximating objective functions and bypassing costly computations. This study investigates surrogate models in multi-objective optimization of composite shells. It incorporates deep neural networks to approximate relationships between input parameters and key metrics, enabling exploration of design possibilities. Incorporating mode shape identification enhances accuracy, especially in multi-criteria optimization. Employing network ensembles strengthens reliability by mitigating model weaknesses. Efficiency analysis assesses required computations, managing the trade-off between cost and accuracy. Considering complex input parameters and comparing against the Monte Carlo approach further demonstrates the methodology’s efficacy. This work showcases the successful integration of network ensembles employed as surrogate models and mode shape identification, enhancing multi-objective optimization in engineering applications. The approach’s efficiency in handling intricate designs and enhancing accuracy has broad implications for optimization methodologies. Full article

23 pages, 1188 KiB  
Article
Entropy-Aware Time-Varying Graph Neural Networks with Generalized Temporal Hawkes Process: Dynamic Link Prediction in the Presence of Node Addition and Deletion
by Bahareh Najafi, Saeedeh Parsaeefard and Alberto Leon-Garcia
Mach. Learn. Knowl. Extr. 2023, 5(4), 1359-1381; https://doi.org/10.3390/make5040069 - 4 Oct 2023
Viewed by 2146
Abstract
This paper addresses the problem of learning temporal graph representations, which capture the changing nature of complex evolving networks. Existing approaches mainly focus on adding new nodes and edges to capture dynamic graph structures. However, to achieve more accurate representation of graph evolution, we consider both the addition and deletion of nodes and edges as events. These events occur at irregular time scales and are modeled using temporal point processes. Our goal is to learn the conditional intensity function of the temporal point process to investigate the influence of deletion events on node representation learning for link-level prediction. We incorporate network entropy, a measure of node and edge significance, to capture the effect of node deletion and edge removal in our framework. Additionally, we leveraged the characteristics of a generalized temporal Hawkes process, which considers the inhibitory effects of events where past occurrences can reduce future intensity. This framework enables dynamic representation learning by effectively modeling both addition and deletion events in the temporal graph. To evaluate our approach, we utilize autonomous system graphs, a family of inhomogeneous sparse graphs with instances of node and edge additions and deletions, in a link prediction task. By integrating these enhancements into our framework, we improve the accuracy of dynamic link prediction and enable better understanding of the dynamic evolution of complex networks. Full article

19 pages, 2041 KiB  
Article
Predicting the Long-Term Dependencies in Time Series Using Recurrent Artificial Neural Networks
by Cristian Ubal, Gustavo Di-Giorgi, Javier E. Contreras-Reyes and Rodrigo Salas
Mach. Learn. Knowl. Extr. 2023, 5(4), 1340-1358; https://doi.org/10.3390/make5040068 - 2 Oct 2023
Cited by 7 | Viewed by 3694
Abstract
Long-term dependence is an essential feature for the predictability of time series. Estimating the parameter that describes long memory is essential to describing the behavior of time series models. However, most long memory estimation methods assume that this parameter has a constant value throughout the time series and do not consider that the parameter may change over time. In this work, we propose an automated methodology that combines estimation methods for the fractional differentiation parameter (and/or Hurst parameter) with Recurrent Neural Networks (RNNs), so that the networks learn and predict long memory dependencies from information obtained in nonlinear time series. The proposal combines three methods that allow for a better approximation in the prediction of the parameter values for each of the windows obtained, using RNNs as an adaptive method to learn and predict long memory dependencies in time series. For the RNNs, we have evaluated four different architectures: the simple RNN, LSTM, BiLSTM, and GRU. These models are built from blocks with gates controlling the cell state and memory. We have evaluated the proposed approach on both synthetic and real-world datasets, using Whittle's estimates of the Hurst parameter classically obtained in each window. For the synthetic data, we simulated ARFIMA models to generate several time series by varying the fractional differentiation parameter. The real-world IPSA stock option index and Tree Ring time series datasets were also evaluated. All of the results show that the proposed approach can predict the Hurst exponent with good performance by selecting the optimal window size and overlap change. Full article
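As background to the abstract above, the Hurst parameter it discusses can be estimated per window by several classical methods. The sketch below uses the simple rescaled-range (R/S) statistic rather than the Whittle estimator the paper employs, and the block sizes and test series are invented for illustration.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
# log(mean R/S) versus log(block size n). H ~ 0.5 for white noise,
# H > 0.5 for persistent (long-memory) series.
import random
from math import log, sqrt

def rs_stat(x):
    """R/S of one block: range of cumulative mean-deviations over the std."""
    n = len(x)
    m = sum(x) / n
    dev, c = [], 0.0
    for v in x:
        c += v - m
        dev.append(c)
    s = sqrt(sum((v - m) ** 2 for v in x) / n)
    return (max(dev) - min(dev)) / s if s > 0 else None

def hurst_rs(x, sizes=(8, 16, 32, 64)):
    """Least-squares slope of log(R/S) vs log(n) over non-overlapping blocks."""
    pts = []
    for n in sizes:
        stats = [rs_stat(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
        stats = [s for s in stats if s is not None]
        pts.append((log(n), log(sum(stats) / len(stats))))
    k = len(pts)
    mx = sum(p[0] for p in pts) / k
    my = sum(p[1] for p in pts) / k
    return sum((p[0] - mx) * (p[1] - my) for p in pts) / \
        sum((p[0] - mx) ** 2 for p in pts)

random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(512)]
h = hurst_rs(noise)  # near 0.5 for white noise (R/S is biased upward on short series)
```

In the paper's pipeline, such per-window estimates become the targets the recurrent networks learn to track as the long-memory parameter drifts over time.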

15 pages, 8626 KiB  
Article
Optimization and Prediction of Different Building Forms for Thermal Energy Performance in the Hot Climate of Cairo Using Genetic Algorithm and Machine Learning
by Amany Khalil, Anas M. Hosney Lila and Nouran Ashraf
Computation 2023, 11(10), 192; https://doi.org/10.3390/computation11100192 - 2 Oct 2023
Cited by 5 | Viewed by 2154
Abstract
The climate change crisis has resulted in the need to use sustainable methods in architectural design, including building form and orientation decisions that can save a significant amount of energy consumed by a building. Several previous studies have optimized building form and envelope for energy performance, but the isolated effect of varieties of possible architectural forms for a specific climate has not been fully investigated. This paper proposes four novel office building form generation methods (the polygon that varies between pentagon and decagon; the pixels that are complex cubic forms; the letters including H, L, U, T; cross and complex cubic forms; and the round family including circular and oval forms) and evaluates their annual thermal energy use intensity (EUI) for Cairo (hot climate). Results demonstrated the applicability of the proposed methods in enhancing the energy performance of the new forms in comparison to the base case. The results of the optimizations are compared together, and the four families are discussed in reference to their different architectural aspects and performance. Scatterplots are developed for the round family (highest performance) to test the impact of each dynamic parameter on EUI. The round family optimization process takes a noticeably high calculation time in comparison to other families. Therefore, an Artificial Neural Network (ANN) prediction model is developed for the round family after simulating 1726 iterations. Training of 1200 configurations is used to predict annual EUI for the remaining 526 iterations. The ANN predicted values are compared against the trained to determine the time saved and accuracy. Full article

26 pages, 466 KiB  
Article
Iterated Clique Reductions in Vertex Weighted Coloring for Large Sparse Graphs
by Yi Fan, Zaijun Zhang, Quan Yu, Yongxuan Lai, Kaile Su, Yiyuan Wang, Shiwei Pan and Longin Jan Latecki
Entropy 2023, 25(10), 1376; https://doi.org/10.3390/e25101376 - 24 Sep 2023
Viewed by 1272
Abstract
The Minimum Vertex Weighted Coloring (MinVWC) problem is an important generalization of the classic Minimum Vertex Coloring (MinVC) problem, which is NP-hard. Given a simple undirected graph G=(V,E), the MinVC problem is to find a coloring such that any pair of adjacent vertices are assigned different colors and the number of colors used is minimized. The MinVWC problem associates each vertex with a positive weight and defines the weight of a color to be the weight of its heaviest vertex; the goal is then to find a coloring that minimizes the sum of weights over all colors. Among various approaches, reduction is an effective one. It tries to obtain a subgraph whose optimal solutions can conveniently be extended into optimal ones for the whole graph, without costly branching. In this paper, we propose a reduction algorithm based on maximal clique enumeration. More specifically, our algorithm utilizes a certain proportion of maximal cliques and obtains lower bounds in order to perform reductions. It alternates between clique sampling and graph reductions and consists of three successive procedures: promising clique reductions, better bound reductions, and post reductions. Experimental results show that our algorithm returns considerably smaller subgraphs for numerous large benchmark graphs, compared to the most recent method, named RedLS. Also, we evaluate the individual impacts and some practical properties of our algorithm. Furthermore, we prove a theorem which indicates that the reduction effects of our algorithm are equivalent to those of a counterpart which enumerates all maximal cliques in the whole graph, provided the run time is sufficiently long. Full article
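To illustrate the objective defined in the abstract (each color class costs the weight of its heaviest vertex, and the total cost is the sum over classes), here is a toy sketch on an invented graph. The greedy coloring is only a baseline for computing a feasible cost; the paper's contribution is a clique-based reduction, not this heuristic.

```python
# Toy MinVWC objective: cost of a proper coloring is the sum, over color
# classes, of the heaviest vertex weight in that class.

def greedy_coloring(vertices, edges):
    """Assign each vertex the smallest color unused by its neighbours."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in vertices:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def vwc_cost(color, weights):
    heaviest = {}
    for v, c in color.items():
        heaviest[c] = max(heaviest.get(c, 0), weights[v])
    return sum(heaviest.values())

verts = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
weights = {"a": 5, "b": 3, "c": 4, "d": 2}
col = greedy_coloring(verts, edges)
print(vwc_cost(col, weights))  # classes {a,d}, {b}, {c} -> 5 + 3 + 4 = 12
```

A reduction in the paper's sense would shrink the graph before any such search, so that an optimal coloring of the reduced subgraph extends to the whole graph.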

27 pages, 1127 KiB  
Article
Automatic Genre Identification for Robust Enrichment of Massive Text Collections: Investigation of Classification Methods in the Era of Large Language Models
by Taja Kuzman, Igor Mozetič and Nikola Ljubešić
Mach. Learn. Knowl. Extr. 2023, 5(3), 1149-1175; https://doi.org/10.3390/make5030059 - 12 Sep 2023
Cited by 7 | Viewed by 3155
Abstract
Massive text collections are the backbone of large language models, the main ingredient of the current significant progress in artificial intelligence. However, as these collections are mostly collected using automatic methods, researchers have few insights into what types of texts they consist of. Automatic genre identification is a text classification task that enriches texts with genre labels, such as promotional and legal, providing meaningful insights into the composition of these large text collections. In this paper, we evaluate machine learning approaches for the genre identification task based on their generalizability across different datasets to assess which model is the most suitable for the downstream task of enriching large web corpora with genre information. We train and test multiple fine-tuned BERT-like Transformer-based models and show that merging different genre-annotated datasets yields superior results. Moreover, we explore the zero-shot capabilities of large GPT Transformer models in this task and discuss the advantages and disadvantages of the zero-shot approach. We also publish the best-performing fine-tuned model that enables automatic genre annotation in multiple languages. In addition, to promote further research in this area, we plan to share, upon request, a new benchmark for automatic genre annotation, ensuring the non-exposure of the latest large language models. Full article
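The cross-dataset generalization question at the heart of this study can be illustrated with a deliberately tiny stand-in model (a bag-of-words nearest-centroid classifier in plain Python; the paper itself uses fine-tuned Transformer models, and all names below are hypothetical):

```python
from collections import Counter

def centroid(texts):
    # Average bag-of-words vector of a genre's training documents.
    c = Counter()
    for t in texts:
        c.update(t.lower().split())
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

def similarity(text, cent):
    # Sum of centroid weights of the words appearing in the text.
    return sum(cent.get(w, 0.0) for w in text.lower().split())

def classify(text, centroids):
    # Assign the genre whose centroid the text resembles most.
    return max(centroids, key=lambda g: similarity(text, centroids[g]))

# Merging genre-annotated examples from different sources into one
# per-genre training pool mirrors the dataset-merging idea in the paper.
promo = ["buy now discount offer", "great offer buy today"]
legal = ["the court ruled on the contract", "contract law and court procedure"]
centroids = {"promotional": centroid(promo), "legal": centroid(legal)}
```

Held-out texts from a different source can then be classified to probe how well the merged training pool generalizes.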
18 pages, 6558 KiB  
Article
Artificial Neural Networks for Predicting the Diameter of Electrospun Nanofibers Synthesized from Solutions/Emulsions of Biopolymers and Oils
by Guadalupe Cuahuizo-Huitzil, Octavio Olivares-Xometl, María Eugenia Castro, Paulina Arellanes-Lozada, Francisco J. Meléndez-Bustamante, Ivo Humberto Pineda Torres, Claudia Santacruz-Vázquez and Verónica Santacruz-Vázquez
Materials 2023, 16(16), 5720; https://doi.org/10.3390/ma16165720 - 21 Aug 2023
Cited by 2 | Viewed by 1395
Abstract
In the present work, different configurations of artificial neural networks (ANNs) were analyzed in order to predict the experimental diameter of nanofibers produced by means of the electrospinning process, employing polyvinyl alcohol (PVA), PVA/chitosan (CS) and PVA/aloe vera (Av) solutions. In addition, gelatin type A (GT)/alpha-tocopherol (α-TOC), PVA/olive oil (OO), PVA/orange essential oil (OEO), and PVA/anise oil (AO) emulsions were used. The experimental diameters of the nanofibers electrospun from the different tested systems were obtained using scanning electron microscopy (SEM) and ranged from 93.52 nm to 352.1 nm. Of the three studied ANNs, the one that displayed the best prediction results had three hidden layers and used the flow rate, voltage, viscosity, and conductivity as input variables. The calculation error between the experimental and calculated diameters was 3.79%. Additionally, the correlation coefficient (R2) was identified as a function of the ANN configuration, obtaining values of 0.96, 0.98, and 0.98 for one, two, and three hidden layer(s), respectively. It was found that an ANN configuration having more than three hidden layers did not improve the prediction of the experimental diameter of synthesized nanofibers. Full article
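The two figures of merit quoted above (the R2 between experimental and predicted diameters, and the mean percentage error) are straightforward to compute; a minimal sketch, with illustrative values rather than the paper's data:

```python
def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mean_abs_pct_error(y_true, y_pred):
    # Mean absolute percentage error between measured and predicted diameters.
    return 100 * sum(abs(t - p) / t
                     for t, p in zip(y_true, y_pred)) / len(y_true)
```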
16 pages, 2395 KiB  
Review
Physical and Mathematical Models of Micro-Explosions: Achievements and Directions of Improvement
by Dmitrii V. Antonov, Roman M. Fedorenko, Leonid S. Yanovskiy and Pavel A. Strizhak
Energies 2023, 16(16), 6034; https://doi.org/10.3390/en16166034 - 17 Aug 2023
Cited by 7 | Viewed by 1593
Abstract
The environmental, economic, and energy problems of the modern world motivate the development of alternative fuel technologies. Multifuel technology can help reduce the carbon footprint and waste from the raw materials sector as well as slow down the depletion of energy resources. However, there are limitations to the active use of multifuel mixtures in real power plants and engines because they are difficult to spray in combustion chambers and require secondary atomization. Droplet micro-explosion seems the most promising secondary atomization technology in terms of its integral characteristics. This review paper outlines the most interesting approaches to modeling micro-explosions using in-house computer codes and commercial software packages. A physical model of a droplet micro-explosion based on experimental data was analyzed to highlight the schemes and mathematical expressions describing the critical conditions of parent droplet atomization. Approaches are presented that can predict the number, sizes, velocities, and trajectories of emerging child droplets. We also list the empirical data necessary for developing advanced fragmentation models. Finally, we outline the main growth areas for micro-explosion models catering for the needs of spray technology. Full article
31 pages, 3978 KiB  
Article
Identifying the Regions of a Space with the Self-Parameterized Recursively Assessed Decomposition Algorithm (SPRADA)
by Dylan Molinié, Kurosh Madani, Véronique Amarger and Abdennasser Chebira
Mach. Learn. Knowl. Extr. 2023, 5(3), 979-1009; https://doi.org/10.3390/make5030051 - 4 Aug 2023
Viewed by 1774
Abstract
This paper introduces a non-parametric methodology based on classical unsupervised clustering techniques to automatically identify the main regions of a space, without requiring the objective number of clusters, so as to identify the major regular states of unknown industrial systems. Indeed, useful knowledge on real industrial processes entails the identification of their regular states and their historically encountered anomalies. Since both should form compact and salient groups of data, unsupervised clustering generally performs this task fairly accurately; however, this often requires the number of clusters upstream, knowledge which is rarely available. As such, the proposed algorithm performs an initial partitioning of the space, then estimates the integrity of the clusters and repeatedly splits them until every cluster reaches an acceptable integrity; finally, a merging step based on the clusters’ empirical distributions is performed to refine the partitioning. Applied to real industrial data obtained in the scope of a European project, this methodology proved able to automatically identify the main regular states of the system. Results show the robustness of the proposed approach in the fully-automatic and non-parametric identification of the main regions of a space, knowledge which is useful to industrial anomaly detection and behavioral modeling. Full article
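The split-until-integrity loop described above can be sketched in one dimension (illustrative Python; the integrity criterion here is simply the cluster's spread, a hypothetical stand-in for the paper's distribution-based assessment):

```python
def two_means(points, iters=20):
    # Plain 1-D 2-means, seeded at the extremes of the cluster.
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        if not g1 or not g2:
            break
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return g1, g2

def recursive_split(points, max_spread):
    # Split a cluster again and again until its spread ("integrity")
    # is acceptable; no target number of clusters is ever supplied.
    if len(points) < 2 or max(points) - min(points) <= max_spread:
        return [points]
    g1, g2 = two_means(points)
    if not g1 or not g2:
        return [points]
    return recursive_split(g1, max_spread) + recursive_split(g2, max_spread)
```

On data with three well-separated groups, the recursion stops by itself at three clusters.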
23 pages, 5533 KiB  
Article
Optimization of Circulating Fluidized Bed Boiler Combustion Key Control Parameters Based on Machine Learning
by Lei Han, Lingmei Wang, Hairui Yang, Chengzhen Jia, Enlong Meng, Yushan Liu and Shaoping Yin
Energies 2023, 16(15), 5674; https://doi.org/10.3390/en16155674 - 28 Jul 2023
Cited by 3 | Viewed by 1697
Abstract
When a coal-fired circulating fluidized bed unit participates in the peak regulation of the power grid, the thermal automatic control system assists the operator with an adjustment mode that focuses on pollutant control while neglecting economy, so the unit’s operating performance retains considerable untapped potential. The high-dimensional and coupled characteristics of circulating fluidized bed boiler data place refined and demanding requirements on combustion optimization analysis and open-loop operation guidance. Therefore, this paper proposes a combustion optimization method that incorporates neighborhood rough set machine learning. This method first reduces the control parameters affecting multi-objective combustion optimization with a neighborhood rough set algorithm that fully considers the correlation of each variable combination, and then establishes a multi-objective combustion optimization prediction model combined with the online calculation of boiler thermal efficiency. Finally, the NSGA-II algorithm optimizes the setting values of the control parameters of the boiler combustion system. The results show that this method reduces the number of control commands involved in combustion optimization adjustment from 26 to 11. At the same time, relative to the results obtained with traditional combustion optimization methods under high, medium, and medium-low load conditions, the boiler thermal efficiency increased by 0.07%, decreased by 0.02%, and increased by 0.55%, respectively, while the nitrogen oxide emission concentration decreased by 5.02 mg/Nm3, 7.77 mg/Nm3, and 7.03 mg/Nm3, respectively.
The implementation of this method can help balance the economy and pollutant discharge of the boiler combustion system under variable working conditions, guide operators to adjust combustion more accurately, and effectively reduce ineffective energy consumption in the adjustment process. The proposal and application of this method lay a foundation for the construction of smart power plants. Full article
21 pages, 8324 KiB  
Article
Attention-Focused Machine Learning Method to Provide the Stochastic Load Forecasts Needed by Electric Utilities for the Evolving Electrical Distribution System
by John O’Donnell and Wencong Su
Energies 2023, 16(15), 5661; https://doi.org/10.3390/en16155661 - 27 Jul 2023
Cited by 4 | Viewed by 2081
Abstract
Greater variation in electrical load should be expected in the future due to the increasing penetration of electric vehicles, photovoltaics, storage, and other technologies. The adoption of these technologies will vary by area and time, and if not identified early and managed by electric utilities, these new customer needs could result in power quality, reliability, and protection issues. Furthermore, comprehensively studying the uncertainty and variation in the load on circuit elements over periods of several months has the potential to increase the efficient use of traditional resources, non-wires alternatives, and microgrids to better serve customers. To increase the understanding of electrical load, the authors propose a multistep, attention-focused, and efficient machine learning process to provide probabilistic forecasts of distribution transformer load for several months into the future. The method uses the solar irradiance, temperature, dew point, time of day, and other features to achieve up to an 86% coefficient of determination (R2). Full article
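Probabilistic (stochastic) load forecasts of the kind described here are commonly scored per quantile with the pinball loss; a minimal sketch (illustrative, not the authors' evaluation code):

```python
def pinball_loss(y_true, y_pred, q):
    # Pinball (quantile) loss at level q: under-prediction is penalized by q,
    # over-prediction by (1 - q); lower is better. Averaged over the series.
    losses = []
    for t, p in zip(y_true, y_pred):
        losses.append(q * (t - p) if t >= p else (1 - q) * (p - t))
    return sum(losses) / len(losses)
```

For example, at q = 0.9 an under-prediction of 2 kW costs 1.8, while an over-prediction of 2 kW costs only 0.2, which is what pushes the q = 0.9 forecast toward the upper tail of the load distribution.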
21 pages, 1765 KiB  
Article
Optimal Data-Driven Modelling of a Microbial Fuel Cell
by Mojeed Opeyemi Oyedeji, Abdullah Alharbi, Mujahed Aldhaifallah and Hegazy Rezk
Energies 2023, 16(12), 4740; https://doi.org/10.3390/en16124740 - 15 Jun 2023
Cited by 5 | Viewed by 1815
Abstract
Microbial fuel cells (MFCs) are biocells that use microorganisms as biocatalysts to break down organic matter and convert chemical energy into electrical energy. Presently, the application of MFCs as alternative energy sources is limited by their low power attribute. Optimization of MFCs is very important to harness optimum energy. In this study, we develop optimal data-driven models for a typical MFC synthesized from polymethylmethacrylate and two graphite plates using machine learning algorithms including support vector regression (SVR), artificial neural networks (ANNs), Gaussian process regression (GPR), and ensemble learners. Power density and output voltage were modeled from two different datasets; the first dataset has current density and anolyte concentration as features, while the second dataset considers current density and chemical oxygen demand as features. Hyperparameter optimization was carried out on each of the considered machine learning-based models using Bayesian optimization, grid search, and random search to arrive at the best possible models for the MFC. A model was derived for power density and output voltage having 99% accuracy on testing set evaluations. Full article
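The hyperparameter search step can be illustrated with a toy grid search over a two-parameter stand-in model (a linear voltage-vs-current-density polarization curve; the paper's actual models are SVR, ANN, GPR, and ensembles tuned with Bayesian, grid, and random search):

```python
def sse(params, data):
    # Sum of squared errors of the toy model v = a - b * i
    # over (current_density, voltage) pairs.
    a, b = params
    return sum((v - (a - b * i)) ** 2 for i, v in data)

def grid_search(grid_a, grid_b, data):
    # Exhaustive search over the parameter grid; returns the best (a, b).
    return min(((a, b) for a in grid_a for b in grid_b),
               key=lambda p: sse(p, data))
```

Random search and Bayesian optimization differ only in how candidate points are proposed; the objective being minimized is the same.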
37 pages, 2255 KiB  
Systematic Review
Systematic Review of Recommendation Systems for Course Selection
by Shrooq Algarni and Frederick Sheldon
Mach. Learn. Knowl. Extr. 2023, 5(2), 560-596; https://doi.org/10.3390/make5020033 - 6 Jun 2023
Cited by 6 | Viewed by 13324
Abstract
Course recommender systems play an increasingly pivotal role in the educational landscape, driving personalization and informed decision-making for students. However, these systems face significant challenges, including managing a large and dynamic decision space and addressing the cold start problem for new students. This article endeavors to provide a comprehensive review and background to fully understand recent research on course recommender systems and their impact on learning. We present a detailed summary of empirical data supporting the use of these systems in educational strategic planning. We examined case studies conducted over the previous six years (2017–2022), with a focus on 35 key studies selected from 1938 academic papers found using the CADIMA tool. This systematic literature review (SLR) assesses various recommender system methodologies used to suggest course selection tracks, aiming to determine the most effective evidence-based approach. Full article
15 pages, 3112 KiB  
Article
Spare Parts Demand Forecasting Method Based on Intermittent Feature Adaptation
by Lilin Fan, Xia Liu, Wentao Mao, Kai Yang and Zhaoyu Song
Entropy 2023, 25(5), 764; https://doi.org/10.3390/e25050764 - 7 May 2023
Cited by 2 | Viewed by 3250
Abstract
The demand for complex equipment aftermarket parts is mostly sporadic, showing typical intermittent characteristics as a whole, resulting in the evolution law of a single demand series having insufficient information, which restricts the prediction effect of existing methods. To solve this problem, this paper proposes a prediction method of intermittent feature adaptation from the perspective of transfer learning. Firstly, to extract the intermittent features of the demand series, an intermittent time series domain partitioning algorithm is proposed by mining the demand occurrence time and demand interval information in the series, then constructing the metrics, and using a hierarchical clustering algorithm to divide all the series into different sub-source domains. Secondly, the intermittent and temporal characteristics of the sequence are combined to construct a weight vector, and the learning of common information between domains is accomplished by weighting the distance of the output features of each cycle between domains. Finally, experiments are conducted on the actual after-sales datasets of two complex equipment manufacturing enterprises. Compared with various prediction methods, the method in this paper can effectively predict future demand trends, and the prediction’s stability and accuracy are significantly improved. Full article
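The intermittent features mined here (demand occurrence times and demand intervals) are conventionally summarized by the average demand interval (ADI) and the squared coefficient of variation of demand sizes (CV²); a minimal sketch using the commonly cited Syntetos–Boylan cutoffs (illustrative, not the paper's domain-partitioning algorithm):

```python
def adi(series):
    # Average Demand Interval: mean gap between periods with non-zero demand.
    demand_periods = [i for i, x in enumerate(series) if x > 0]
    if len(demand_periods) < 2:
        return float('inf')
    gaps = [b - a for a, b in zip(demand_periods, demand_periods[1:])]
    return sum(gaps) / len(gaps)

def cv2(series):
    # Squared coefficient of variation of the non-zero demand sizes.
    sizes = [x for x in series if x > 0]
    mean = sum(sizes) / len(sizes)
    var = sum((x - mean) ** 2 for x in sizes) / len(sizes)
    return var / mean ** 2

def is_intermittent(series, adi_cut=1.32, cv2_cut=0.49):
    # High ADI with low CV^2 is the classic "intermittent" quadrant.
    return adi(series) >= adi_cut and cv2(series) < cv2_cut
```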
13 pages, 747 KiB  
Article
A Reinforcement Learning Approach for Scheduling Problems with Improved Generalization through Order Swapping
by Deepak Vivekanandan, Samuel Wirth, Patrick Karlbauer and Noah Klarmann
Mach. Learn. Knowl. Extr. 2023, 5(2), 418-430; https://doi.org/10.3390/make5020025 - 29 Apr 2023
Cited by 3 | Viewed by 2771
Abstract
The scheduling of production resources (such as assigning jobs to machines) plays a vital role in the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the Job Shop Scheduling Problem (JSSP) is addressed in this work. The JSSP falls into the category of NP-hard Combinatorial Optimization Problems (COPs), for which solving by exhaustive search becomes unfeasible. Simple heuristics such as First-In, First-Out or Largest Processing Time First, and metaheuristics such as tabu search, are often adopted to solve the problem by truncating the search space. These methods become inefficient for large problem sizes, as their solutions are either far from the optimum or time-consuming to obtain. In recent years, research on using Deep Reinforcement Learning (DRL) to solve COPs has gained interest and has shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel approach to solving the JSSP, examining the objectives of generalization and solution effectiveness using DRL. In particular, we employ the Proximal Policy Optimization (PPO) algorithm, which adopts the policy-gradient paradigm and is found to perform well in the constrained dispatching of jobs. We incorporate a new method called the Order Swapping Mechanism (OSM) in the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth using a set of available benchmark instances and by comparing our results with the work of other groups. Full article
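The dispatching-rule baseline against which DRL policies are typically compared can be sketched as a greedy job-shop simulator (illustrative Python; the rule here starts the schedulable operation with the earliest possible start time, breaking ties by longest processing time, a hypothetical combination rather than the paper's setup):

```python
def dispatch_makespan(jobs):
    # Each job is a sequence of (machine, duration) operations that must run
    # in order; each machine handles one operation at a time.
    n_machines = 1 + max(m for job in jobs for m, _ in job)
    machine_ready = [0] * n_machines   # time each machine becomes free
    job_ready = [0] * len(jobs)        # time each job's next op may start
    next_op = [0] * len(jobs)
    makespan = 0
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        # Candidate next operation of every unfinished job; pick the one
        # with the earliest feasible start (ties: largest duration first).
        cands = []
        for j, job in enumerate(jobs):
            if next_op[j] < len(job):
                m, d = job[next_op[j]]
                start = max(job_ready[j], machine_ready[m])
                cands.append((start, -d, j, m, d))
        start, _, j, m, d = min(cands)
        job_ready[j] = machine_ready[m] = start + d
        next_op[j] += 1
        makespan = max(makespan, start + d)
    return makespan
```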
22 pages, 4053 KiB  
Article
An Optimal Scheduling Method for an Integrated Energy System Based on an Improved k-Means Clustering Algorithm
by Fan Li, Jingxi Su and Bo Sun
Energies 2023, 16(9), 3713; https://doi.org/10.3390/en16093713 - 26 Apr 2023
Cited by 5 | Viewed by 1419
Abstract
This study proposes an optimal scheduling method for complex integrated energy systems. The proposed method employs a heuristic algorithm to maximize its energy, economy, and environment indices and optimize the system operation plan. It uses k-means combined with box plots (Imk-means) to improve the convergence speed of the heuristic algorithm by forming its initial conditions, thus enhancing the optimization scheduling speed. First, considering the system source and load factors, the Imk-means is presented to find the typical and extreme days in a historical optimization dataset. The output results for these typical and extreme days represent common and abnormal optimization results, respectively. Thus, based on the representative historical data, a traditional heuristic algorithm with an initial solution set, such as the genetic algorithm, can be accelerated greatly. Secondly, the initial populations of the genetic algorithm are dispersed around the historical outputs of the typical and extreme days, and many random populations are supplemented simultaneously. Finally, the improved genetic algorithm performs the solution process faster to find optimal results and can help prevent the results from falling into local optima. A case study was conducted to verify the effectiveness of the proposed method. The results show that the proposed method can decrease the running time by up to 89.29%, and by 72.68% on average, compared with the traditional genetic algorithm. Meanwhile, the proposed method has a slightly increased optimization index, indicating no loss of optimization accuracy during acceleration. It can also indicate that the proposed method does not fall into local optima, as it has fewer iterations. Full article
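The box-plot side of Imk-means, flagging "extreme days" as values outside the whiskers, follows the classic interquartile-range rule; a minimal sketch (illustrative, not the paper's code):

```python
def quartiles(values):
    # Q1 and Q3 via simple linear-interpolation percentiles.
    s = sorted(values)
    def pct(p):
        k = (len(s) - 1) * p
        f = int(k)
        c = min(f + 1, len(s) - 1)
        return s[f] + (s[c] - s[f]) * (k - f)
    return pct(0.25), pct(0.75)

def boxplot_outliers(values, whisker=1.5):
    # Classic box-plot rule: anything outside
    # [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged as an "extreme day".
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return [v for v in values if v < lo or v > hi]
```

The flagged values seed part of the genetic algorithm's initial population so that abnormal operating days are represented alongside typical ones.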
14 pages, 937 KiB  
Article
Reviving the Dynamics of Attacked Reservoir Computers
by Ruizhi Cao, Chun Guan, Zhongxue Gan and Siyang Leng
Entropy 2023, 25(3), 515; https://doi.org/10.3390/e25030515 - 16 Mar 2023
Cited by 3 | Viewed by 1693
Abstract
Physically implemented neural networks are subject to external perturbations and internal variations. Existing works focus on adversarial attacks but seldom consider attacks on the network structure and the corresponding recovery methods. Inspired by the biological neural compensation mechanism and the neuromodulation technique used in clinical practice, we propose a novel framework for reviving attacked reservoir computers, consisting of several strategies directed at different types of attacks on structure, which adjust only a minor fraction of edges in the reservoir. Numerical experiments demonstrate the efficacy and broad applicability of the framework and reveal inspiring insights into the mechanisms. This work provides a vehicle for improving the robustness of reservoir computers and can be generalized to broader types of neural networks. Full article
15 pages, 1140 KiB  
Article
Implicit Solutions of the Electrical Impedance Tomography Inverse Problem in the Continuous Domain with Deep Neural Networks
by Thilo Strauss and Taufiquar Khan
Entropy 2023, 25(3), 493; https://doi.org/10.3390/e25030493 - 13 Mar 2023
Cited by 1 | Viewed by 2224
Abstract
Electrical impedance tomography (EIT) is a non-invasive imaging modality used for estimating the conductivity of an object Ω from boundary electrode measurements. In recent years, researchers have achieved substantial progress in analytical and numerical methods for the EIT inverse problem. Despite this success, numerical instability is still a major hurdle due to many factors, including the discretization error of the problem. Furthermore, most algorithms with good performance are relatively time-consuming and do not allow real-time applications. In our approach, the goal is to separate the unknown conductivity into two regions, namely the region of homogeneous background conductivity and the region of non-homogeneous conductivity. Therefore, we pose and solve the problem of shape reconstruction using machine learning. We propose a novel and simple yet intriguing neural network architecture capable of solving the EIT inverse problem. It addresses previous difficulties, including instability, and is easily adaptable to other ill-posed coefficient inverse problems. That is, the proposed model estimates the probability that a point belongs to the background region or to the non-homogeneous region on the continuous space ℝ^d ⊇ Ω with d ∈ {2, 3}. The proposed model does not make assumptions about the forward model and allows for solving the inverse problem in real time. The proposed machine learning approach for shape reconstruction is also used to improve gradient-based methods for estimating the unknown conductivity. In this paper, we propose a piece-wise constant reconstruction method that is novel in the inverse problem setting but inspired by recent approaches from the 3D vision community. We also extend this method into a novel constrained reconstruction method.
We present extensive numerical experiments to show the performance of the architecture and compare the proposed method with previous analytic algorithms, mainly the monotonicity-based shape reconstruction algorithm and iteratively regularized Gauss–Newton method. Full article
23 pages, 5473 KiB  
Article
Feature Selection Using New Version of V-Shaped Transfer Function for Salp Swarm Algorithm in Sentiment Analysis
by Dinar Ajeng Kristiyanti, Imas Sukaesih Sitanggang, Annisa and Sri Nurdiati
Cited by 11 | Viewed by 2604
Abstract
(1) Background: Feature selection is the biggest challenge in feature-rich sentiment analysis: selecting a feature set that is relevant, informative about the relationships between features, and free of noise in high-dimensional datasets, in order to improve classifier performance. This study aims to propose a binary version of a metaheuristic optimization algorithm based on Swarm Intelligence, namely the Salp Swarm Algorithm (SSA), for feature selection in sentiment analysis. (2) Methods: Significant feature subsets were selected using the SSA. Transfer functions of the forms S-TF, V-TF, X-TF, U-TF, Z-TF, and a new type of V-TF with a simpler mathematical formula were used as the binarization approach enabling search agents to move in the search space. The stages of the study include data pre-processing, feature selection using SSA-TF and other conventional feature selection methods, modelling using K-Nearest Neighbor (KNN), Support Vector Machine, and Naïve Bayes, and model evaluation. (3) Results: The results showed an increase of 31.55%, up to a best accuracy of 80.95%, for the KNN model using the SSA-based new V-TF. (4) Conclusions: We found that SSA-New V3-TF is the feature selection method with the highest accuracy and the lowest runtime compared to the other algorithms in sentiment analysis. Full article
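A transfer function is what turns a continuous salp position into a bit-flip probability over the feature mask. As a sketch, the widely used |tanh| V-shaped form is shown below (illustrative only; the paper's new V-TF has a different, simpler formula that is not reproduced here):

```python
import math

def v_tf(x):
    # A common V-shaped transfer function: maps a continuous salp position
    # component to a probability of flipping the corresponding feature bit.
    return abs(math.tanh(x))

def binarize(position, bits, rand):
    # Flip bit d with probability v_tf(position[d]);
    # `rand` supplies the uniform(0, 1) draws, one per dimension.
    return [1 - b if r < v_tf(x) else b
            for x, b, r in zip(position, bits, rand)]
```

Near x = 0 the flip probability vanishes (the agent keeps its feature subset), while large |x| makes a flip almost certain, which is how the V-shape encodes step size into exploration.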
28 pages, 7982 KiB  
Article
Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology
by Dongming Yan, Yue Liu, Lijuan Li, Xuezhu Lin and Lili Guo
Entropy 2023, 25(3), 450; https://doi.org/10.3390/e25030450 - 4 Mar 2023
Cited by 2 | Viewed by 1729
Abstract
In the large-scale measurement field, deployment planning usually uses the Monte Carlo method for simulation analysis, which has high algorithm complexity. At the same time, traditional station planning is inefficient and unable to calculate overall accessibility due to the occlusion of tooling. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), i.e., the PROA. Simultaneously, its convergence speed and robustness were verified in different dimensions using the CEC benchmark function. The convergence speed of 67.5–74% of the results is better than the ROA, and the robustness results of 66.67–75% are better than those of the ROA. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve optimal deployment planning; the performance of the PROA was verified by simulation analysis. In the case of six stations, the maximum visible area of the PROA reaches 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-size measurement field environments. Full article
19 pages, 1825 KiB  
Review
Introduction of Materials Genome Technology and Its Applications in the Field of Biomedical Materials
by Yashi Qiu, Zhaoying Wu, Jiali Wang, Chao Zhang and Heye Zhang
Materials 2023, 16(5), 1906; https://doi.org/10.3390/ma16051906 - 25 Feb 2023
Cited by 5 | Viewed by 2298
Abstract
Traditional research and development (R&D) on biomedical materials depends heavily on the trial-and-error process, thereby leading to a huge economic and time burden. Most recently, materials genome technology (MGT) has been recognized as an effective approach to addressing this problem. In this paper, the basic concepts involved in MGT are introduced, and the applications of MGT in the R&D of metallic, inorganic non-metallic, polymeric, and composite biomedical materials are summarized; in view of the existing limitations of MGT for the R&D of biomedical materials, potential strategies are proposed on the establishment and management of material databases, the upgrading of high-throughput experimental technology, the construction of data mining prediction platforms, and the training of relevant materials talents. Finally, the future trends of MGT for the R&D of biomedical materials are discussed. Full article
23 pages, 3571 KiB  
Article
Parametric Analysis of Thick FGM Plates Based on 3D Thermo-Elasticity Theory: A Proper Generalized Decomposition Approach
by Mohammad-Javad Kazemzadeh-Parsi, Amine Ammar and Francisco Chinesta
Materials 2023, 16(4), 1753; https://doi.org/10.3390/ma16041753 - 20 Feb 2023
Cited by 4 | Viewed by 1818
Abstract
In the present work, the general and well-known model reduction technique PGD (Proper Generalized Decomposition) is used for parametric analysis of the thermo-elasticity of FGMs (Functionally Graded Materials). FGMs have important applications in space technologies, especially when a part undergoes an extreme thermal environment. Here, material gradation is considered in one, two and three directions, and the 3D heat transfer and theory-of-elasticity equations are solved to obtain an accurate temperature field and account for all shear deformations. A parametric analysis of FGM materials is especially useful in material design and optimization. In the PGD technique, the field variables are separated into a set of univariate functions, and the high-dimensional governing equations reduce to a set of one-dimensional problems. Owing to the curse of dimensionality, solving a high-dimensional parametric problem directly is considerably more computationally intensive than solving a set of one-dimensional problems; the PGD therefore makes it possible to handle high-dimensional problems efficiently. In the present work, some sample examples in 4D and 5D computational spaces are solved, and the results are presented. Full article
10 pages, 545 KiB  
Article
Quick Estimate of Information Decomposition for Text Style Transfer
by Viacheslav Shibaev, Eckehard Olbrich, Jürgen Jost and Ivan P. Yamshchikov
Entropy 2023, 25(2), 322; https://doi.org/10.3390/e25020322 - 10 Feb 2023
Cited by 1 | Viewed by 1669
Abstract
A growing number of papers on style transfer for texts rely on information decomposition. The performance of the resulting systems is usually assessed empirically in terms of the output quality or requires laborious experiments. This paper suggests a straightforward information theoretical framework to assess the quality of information decomposition for latent representations in the context of style transfer. Experimenting with several state-of-the-art models, we demonstrate that such estimates could be used as a fast and straightforward health check for the models instead of more laborious empirical experiments. Full article
20 pages, 1713 KiB  
Review
A Survey on the Application of Machine Learning in Turbulent Flow Simulations
by Maciej Majchrzak, Katarzyna Marciniak-Lukasiak and Piotr Lukasiak
Energies 2023, 16(4), 1755; https://doi.org/10.3390/en16041755 - 9 Feb 2023
Cited by 6 | Viewed by 2605
Abstract
As early as the end of the 19th century, shortly after mathematical descriptions of fluid flow—such as the Navier–Stokes equations—were developed, the idea of using them for flow simulations emerged. However, it was soon discovered that the computational requirements of problems such as atmospheric phenomena and engineering calculations made hand computation impractical. The dawn of the computer age marked the beginning of computational fluid mechanics, and its subsequent popularization made computational fluid dynamics one of the common tools used in science and engineering. From the beginning, however, the method has faced a trade-off between accuracy and computational requirements. The purpose of this work is to examine how recent advances in machine learning can be applied to further develop the seemingly plateaued method. The paper reviews examples of applying machine learning to improve various types of computational flow simulations, both by increasing the accuracy of the results and by reducing calculation times, and discusses the effectiveness of the presented methods, their chances of acceptance by industry, possible obstacles, and potential directions for their development. One can observe an evolution of solutions: from the simple determination of closure coefficients, through more advanced attempts to use machine learning as an alternative to the classical methods of solving the differential equations on which computational fluid dynamics is based, up to turbulence models built solely from neural networks. A continuation of these three trends may lead to at least a partial replacement of Navier–Stokes-based computational fluid dynamics by machine-learning-based solutions. Full article
14 pages, 3393 KiB  
Article
Predicting Terrestrial Heat Flow in North China Using Multiple Geological and Geophysical Datasets Based on Machine Learning Method
by Shan Xu, Chang Ni and Xiangyun Hu
Energies 2023, 16(4), 1620; https://doi.org/10.3390/en16041620 - 6 Feb 2023
Cited by 4 | Viewed by 1775
Abstract
Geothermal heat flow is an essential parameter for the exploration of geothermal energy, but the cost is often prohibitive if dense heat flow measurements are arranged across a study area. Nevertheless, the limited and sparse heat flow observation points must be supplemented to study the regional geothermal setting, and this research therefore aims to provide a new, reliable map of terrestrial heat flow for the subsequent development of geothermal resources. The Gradient Boosted Regression Tree (GBRT) prediction model used in this paper addresses the insufficient number of heat flow observations in North China by incorporating the geological and geophysical information of the region: the sample data are trained using 12 kinds of geological and geophysical features, yielding a robust GBRT prediction model. The performance of the GBRT method was evaluated against kriging interpolation, minimum curvature interpolation, and a 3D interpolation algorithm. Based on the GBRT prediction model, a new heat flow map with a resolution of 0.25° × 0.25° was produced, which depicts the terrestrial heat flow distribution in the study area in a more detailed and reasonable way than the interpolation results. The high heat flow values were mostly concentrated along the northeastern boundary of the Tibet Plateau, with a few scattered, small-scale high heat flow areas in the southeastern part of the North China Craton (NCC) adjacent to the Pacific Ocean. The low heat flow values were mainly resolved in the northern part of the Trans-North China Orogenic belt (TNCO) and the southernmost part of the NCC. By comparing the predicted heat flow map with the plate tectonics, the olivine-Mg#, and the hot spring distribution in North China, we found that the GBRT can obtain reliable results under the constraint of geological and geophysical information in regions with scarce and unevenly distributed heat flow observations. Full article
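The gradient-boosting idea behind GBRT can be sketched independently of the paper's data: each round fits a weak learner to the current residuals and adds it with a shrinkage factor. The following is a minimal, hypothetical stump-based booster on synthetic stand-ins for the geological and geophysical features, not the authors' implementation:

```python
import numpy as np

def fit_stump(X, r):
    """Single-feature threshold split that best fits the residuals r."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            lm, rm = r[left].mean(), r[~left].mean()
            err = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lm, rm)
    return best

def predict_stump(stump, X):
    j, t, lm, rm = stump
    return np.where(X[:, j] <= t, lm, rm)

def gbrt_fit(X, y, n_rounds=50, lr=0.1):
    base, stumps = y.mean(), []
    pred = np.full(len(y), base)
    for _ in range(n_rounds):
        s = fit_stump(X, y - pred)            # each round fits the residuals
        pred += lr * predict_stump(s, X)
        stumps.append(s)
    return base, stumps, lr

def gbrt_predict(model, X):
    base, stumps, lr = model
    return base + lr * sum(predict_stump(s, X) for s in stumps)

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))   # stand-ins for geological/geophysical features
y = 60 + 40 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 2, 200)  # synthetic "heat flow"
model = gbrt_fit(X, y)
rmse = float(np.sqrt(np.mean((gbrt_predict(model, X) - y) ** 2)))
```

Production work would use deeper trees and a held-out validation set, as the paper's comparison against interpolation baselines does.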
15 pages, 3413 KiB  
Article
Mobile Application for Tomato Plant Leaf Disease Detection Using a Dense Convolutional Network Architecture
by Intan Nurma Yulita, Naufal Ariful Amri and Akik Hidayat
Cited by 10 | Viewed by 4538
Abstract
In Indonesia, tomato is one of the horticultural products with the highest economic value. To maintain enhanced tomato plant production, it is necessary to monitor the growth of tomato plants, particularly their leaves. The quality and quantity of tomato production can be preserved with the aid of computer technology, which can identify diseases in tomato plant leaves. In this study, a deep learning algorithm with a DenseNet architecture was implemented. Multiple hyperparameter tests were conducted to determine the optimal model, which was constructed using two hidden layers, a DenseNet trainable layer on dense block 5, and a dropout rate of 0.4. The 10-fold cross-validation evaluation of the model yielded an accuracy of 95.7 percent and an F1-score of 95.4 percent. The model with the best assessment results was then implemented in a mobile application to recognize tomato plant leaves. Full article
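The dense connectivity that gives DenseNet its name can be illustrated without a deep learning framework: each layer consumes the concatenation of every preceding feature map, so the channel count grows by a fixed "growth rate" per layer. A minimal numpy sketch (random linear layers stand in for the convolution/BN/ReLU composite; this is not the trained tomato-leaf model):

```python
import numpy as np

def dense_block(x, n_layers, growth, rng):
    """Each layer consumes the concatenation of all preceding feature maps,
    the connectivity pattern that defines a DenseNet dense block."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)
        W = rng.normal(0.0, 0.1, (inp.shape[-1], growth))  # stand-in for conv weights
        features.append(np.maximum(inp @ W, 0.0))          # ReLU
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))                 # batch of 4, 16 input channels
out = dense_block(x, n_layers=5, growth=12, rng=rng)
# channel count grows linearly: 16 + 5 * 12 = 76
```

This feature reuse is what lets the authors fine-tune only the last dense block (block 5) while keeping earlier blocks frozen.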
15 pages, 799 KiB  
Article
Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace
by Xi Zhang, Guiyun Zhang, Dong Zhang and Liping Zhang
Materials 2023, 16(3), 1164; https://doi.org/10.3390/ma16031164 - 30 Jan 2023
Cited by 9 | Viewed by 1747 | Correction
Abstract
With its special porous structure and long-lasting carbon sequestration characteristics, biochar has shown potential for improving soil fertility, reducing carbon emissions and increasing soil carbon sequestration. However, biochar technology has not been applied on a large scale, due to complex equipment structures, long transportation distances for raw materials, and high costs. To overcome these issues, the brazier-type gasification and carbonization furnace is designed to carry out dry distillation and anaerobic carbonization and to achieve a high carbonization rate under high-temperature conditions. To improve operation and maintenance efficiency, we formulate the operation of the brazier-type gasification and carbonization furnace as a dynamic multi-objective optimization problem (DMOP). Firstly, we analyze the dynamic factors in the working process of the furnace, such as the equipment capacity, the operating conditions, and the biomass treated by the furnace. Afterward, we select the biochar yield and carbon monoxide emission as the dynamic objectives and model the DMOP. Finally, we apply three dynamic multiobjective evolutionary algorithms to solve the optimization problem so as to verify the effectiveness of the dynamic optimization approach for the gasification and carbonization furnace. Full article
17 pages, 593 KiB  
Article
Optimizing Automated Trading Systems with Deep Reinforcement Learning
by Minh Tran, Duc Pham-Hi and Marc Bui
Algorithms 2023, 16(1), 23; https://doi.org/10.3390/a16010023 - 1 Jan 2023
Cited by 11 | Viewed by 8283
Abstract
In this paper, we propose a novel approach to optimizing the parameters of strategies in automated trading systems. Within a reinforcement learning framework, our work includes the development of a learning environment, state representation, reward function, and learning algorithm for the cryptocurrency market. Considering two simple objective functions, cumulative return and Sharpe ratio, the results showed that both the Deep Reinforcement Learning approach with a Double Deep Q-Network setting and the Bayesian Optimization approach can provide positive average returns. Among the settings studied, the Double Deep Q-Network setting with the Sharpe ratio as the reward function is the best Q-learning trading system. With a daily trading goal, the system outperforms the Bayesian Optimization approach in terms of cumulative return, volatility and execution time, which helps traders make quick and efficient decisions with the latest information from the market. In long-term trading, Bayesian Optimization is the parameter optimization method that brings higher profits. Deep Reinforcement Learning may also provide solutions to the high-dimensionality problem of Bayesian Optimization in upcoming studies, such as optimizing portfolios with multiple assets and diverse trading strategies. Full article
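Why the Sharpe ratio makes a good reward can be shown with a toy comparison of two reward functions over a window of per-step returns. This is a hedged sketch of the general idea, not the authors' exact reward formulation:

```python
import numpy as np

def sharpe_reward(returns, eps=1e-8):
    """Reward a window of per-step returns by mean / (std + eps)."""
    r = np.asarray(returns, dtype=float)
    return float(r.mean() / (r.std() + eps))

def cumulative_reward(returns):
    """Reward the same window by its compounded return."""
    return float(np.prod(1.0 + np.asarray(returns, dtype=float)) - 1.0)

steady = [0.01] * 10            # small, consistent gains
volatile = [0.10, -0.08] * 5    # same mean per-step return, large swings
```

Both windows have the same mean per-step return, but the Sharpe-style reward strongly prefers the steady one, which is why a Sharpe-driven agent tends toward lower-volatility policies than a cumulative-return-driven one.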
22 pages, 1687 KiB  
Article
Improved Anomaly Detection by Using the Attention-Based Isolation Forest
by Lev Utkin, Andrey Ageev, Andrei Konstantinov and Vladimir Muliukha
Algorithms 2023, 16(1), 19; https://doi.org/10.3390/a16010019 - 28 Dec 2022
Cited by 3 | Viewed by 3921
Abstract
A new modification of the isolation forest called the attention-based isolation forest (ABIForest) is proposed for solving the anomaly detection problem. It incorporates an attention mechanism in the form of Nadaraya–Watson regression into the isolation forest to improve the solution of the anomaly detection problem. The main idea underlying the modification is the assignment of attention weights to each path of trees with learnable parameters depending on the instances and trees themselves. Huber’s contamination model is proposed to be used to define the attention weights and their parameters. As a result, the attention weights are linearly dependent on learnable attention parameters that are trained by solving a standard linear or quadratic optimization problem. ABIForest can be viewed as the first modification of the isolation forest to incorporate an attention mechanism in a simple way without applying gradient-based algorithms. Numerical experiments with synthetic and real datasets illustrate that the results of ABIForest outperform those of other methods. The code of the proposed algorithms has been made available. Full article
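The base mechanism that ABIForest builds on is easy to sketch: a point that can be isolated by few random splits is anomalous. The following is a minimal isolation-path sketch in numpy; the attention weighting over tree paths, which is the paper's contribution, is not included:

```python
import numpy as np

def path_length(x, X, rng, depth=0, max_depth=10):
    """Depth at which point x is isolated in one random tree built on X."""
    if depth >= max_depth or len(X) <= 1:
        return depth
    j = rng.integers(X.shape[1])               # random split feature
    lo, hi = X[:, j].min(), X[:, j].max()
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)                # random split value
    side = X[:, j] < split
    branch = X[side] if x[j] < split else X[~side]
    return path_length(x, branch, rng, depth + 1, max_depth)

def anomaly_score(x, X, n_trees=100, seed=0):
    """Shorter average isolation paths mean easier to isolate, i.e. more anomalous."""
    rng = np.random.default_rng(seed)
    return -float(np.mean([path_length(x, X, rng) for _ in range(n_trees)]))

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (256, 2))             # inliers
outlier = np.array([6.0, 6.0])
inlier = np.array([0.0, 0.0])
```

ABIForest replaces the uniform averaging over trees with learnable, instance-dependent attention weights on the paths.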
13 pages, 432 KiB  
Article
Forecasting for Chaotic Time Series Based on GRP-lstmGAN Model: Application to Temperature Series of Rotary Kiln
by Wenyu Hu and Zhizhong Mao
Entropy 2023, 25(1), 52; https://doi.org/10.3390/e25010052 - 27 Dec 2022
Cited by 10 | Viewed by 1607
Abstract
Rotary kiln temperature forecasting plays a significant part in the automatic control of the sintering process. However, accurate forecasts are difficult owing to the complex nonlinear characteristics of rotary kiln temperature time series. With the development of chaos theory, prediction accuracy can be improved by analyzing the essential characteristics of a time series. However, existing prediction methods for chaotic time series cannot fully consider the local and global characteristics of a time series at the same time. Therefore, in this study, a combination of a global recurrence plot (GRP)-based generative adversarial network (GAN) and long short-term memory (LSTM), named GRP-lstmGAN, is proposed, which can effectively display important information about time scales. First, the data are subjected to a series of pre-processing operations, including data smoothing. Then, transforming the one-dimensional time series into two-dimensional images by GRP makes full use of the global and local information of the time series. Finally, LSTM is combined with an improved GAN model for temperature time series prediction. The experimental results show that our model outperforms the comparison models. Full article
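The recurrence-plot transform at the core of GRP is simple to state: threshold the pairwise distances between states, turning a 1D series into a 2D binary image. A minimal scalar version follows (the paper's GRP works on embedded, multi-dimensional states, so this sketch shows only the basic construction):

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j lie within eps."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])     # pairwise distances
    return (dist <= eps).astype(int)

t = np.linspace(0.0, 4.0 * np.pi, 100)
R = recurrence_plot(np.sin(t), eps=0.1)        # periodic signal gives diagonal line structure
```

The resulting image is symmetric with an all-ones main diagonal, and its off-diagonal line patterns encode the recurrences that the downstream GAN/LSTM can exploit.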
15 pages, 1588 KiB  
Article
Cluster-Based Structural Redundancy Identification for Neural Network Compression
by Tingting Wu, Chunhe Song, Peng Zeng and Changqing Xia
Entropy 2023, 25(1), 9; https://doi.org/10.3390/e25010009 - 21 Dec 2022
Cited by 2 | Viewed by 1968
Abstract
The increasingly large structure of neural networks makes it difficult to deploy on edge devices with limited computing resources. Network pruning has become one of the most successful model compression methods in recent years. Existing works typically compress models based on importance, removing unimportant filters. This paper reconsiders model pruning from the perspective of structural redundancy, claiming that identifying functionally similar filters plays a more important role, and proposes a model pruning framework for clustering-based redundancy identification. First, we perform cluster analysis on the filters of each layer to generate similar sets with different functions. We then propose a criterion for identifying redundant filters within similar sets. Finally, we propose a pruning scheme that automatically determines the pruning rate of each layer. Extensive experiments on various benchmark network architectures and datasets demonstrate the effectiveness of our proposed framework. Full article
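The notion of functionally similar filters can be sketched with a simpler pairwise check than the paper's cluster analysis: flag filters whose flattened weight vectors are nearly collinear. A hypothetical numpy sketch of this redundancy test:

```python
import numpy as np

def redundant_pairs(filters, threshold=0.95):
    """Flag filter pairs whose flattened weights are nearly collinear
    (cosine similarity above threshold), i.e. functionally redundant."""
    F = filters.reshape(len(filters), -1)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    sim = F @ F.T                                   # cosine similarity matrix
    return [(i, j) for i in range(len(F)) for j in range(i + 1, len(F))
            if sim[i, j] > threshold]

rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 3, 3, 3))             # 8 conv filters, 3x3x3 weights each
filters[7] = 1.01 * filters[0] + rng.normal(0.0, 1e-3, (3, 3, 3))  # planted near-duplicate
pairs = redundant_pairs(filters)
```

The paper's framework goes further by clustering filters into similar sets and deriving per-layer pruning rates, but the underlying signal is the same: one member of a near-collinear pair can be removed with little functional loss.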
19 pages, 1030 KiB  
Article
A Dual-Population-Based NSGA-III for Constrained Many-Objective Optimization
by Huantong Geng, Zhengli Zhou, Junye Shen and Feifei Song
Entropy 2023, 25(1), 13; https://doi.org/10.3390/e25010013 - 21 Dec 2022
Cited by 5 | Viewed by 1995
Abstract
The main challenge in constrained many-objective optimization problems (CMaOPs) is how to achieve a balance between feasible and infeasible solutions. Most of the existing constrained many-objective evolutionary algorithms (CMaOEAs) are feasibility-driven, neglecting the maintenance of population convergence and diversity when dealing with conflicting objectives and constraints. This might lead to the population becoming stuck in locally optimal or locally feasible regions. To alleviate these challenges, we propose a dual-population-based NSGA-III, named DP-NSGA-III, in which the two populations exchange information through their offspring. The main population, based on NSGA-III, solves the CMaOP, while the auxiliary population, with a different environmental selection, ignores the constraints. In addition, we design an ε-constraint handling method in combination with NSGA-III, aiming to exploit the excellent infeasible solutions in the main population. The proposed DP-NSGA-III is compared with four state-of-the-art CMaOEAs on a series of benchmark problems. The experimental results show that the proposed evolutionary algorithm is highly competitive in solving CMaOPs. Full article
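The ε-constraint idea can be shown in isolation: when two solutions both violate the constraints by at most ε, they are compared on objective value, so a slightly infeasible but well-converged solution is not discarded outright. A minimal, hypothetical single-objective comparator (the paper embeds this rule inside NSGA-III's many-objective selection):

```python
def eps_better(a, b, eps):
    """a and b are (objective, constraint_violation) pairs; lower objective is better.
    Under the eps-constraint rule, solutions violating by at most eps are compared
    on objective value; otherwise the smaller violation wins."""
    fa, va = a
    fb, vb = b
    if va <= eps and vb <= eps:
        return fa < fb          # both "feasible enough": compare objectives
    return va < vb              # otherwise prefer the smaller violation

# a slightly infeasible solution with a better objective beats a feasible one
prefer = eps_better((1.0, 0.05), (2.0, 0.0), eps=0.1)
```

Shrinking ε toward zero over the generations recovers ordinary feasibility-first selection, which is how such schemes usually balance exploration against final feasibility.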
16 pages, 386 KiB  
Article
Initial Solution Generation and Diversified Variable Picking in Local Search for (Weighted) Partial MaxSAT
by Zaijun Zhang, Jincheng Zhou, Xiaoxia Wang, Heng Yang and Yi Fan
Entropy 2022, 24(12), 1846; https://doi.org/10.3390/e24121846 - 18 Dec 2022
Cited by 1 | Viewed by 1781
Abstract
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic propositional (Boolean) satisfiability problem, with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables' structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Our diversification strategy then picks a variable in one of two ways, depending on a parameter: continuing to pick variables with the best benefits, or focusing on a clause with the greatest penalty and selecting its variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver, ImSATLike-TT. Experimental results on (weighted) partial MaxSAT instances from recent MaxSAT Evaluations show that they generally outperform, or have nearly the same performance as, state-of-the-art local search and hybrid competitors, respectively. Furthermore, we carried out experiments to confirm the individual impact of each proposed strategy. Full article
26 pages, 6301 KiB  
Article
Advanced Spatial and Technological Aggregation Scheme for Energy System Models
by Shruthi Patil, Leander Kotzur and Detlef Stolten
Energies 2022, 15(24), 9517; https://doi.org/10.3390/en15249517 - 15 Dec 2022
Cited by 8 | Viewed by 1762
Abstract
Energy system models that consider variable renewable energy sources (VRESs) are computationally complex, and the greater spatial scope and level of detail entailed in such models exacerbate this complexity. As a complexity-reduction approach, this paper considers the simultaneous spatial and technological aggregation of energy system models. To that end, a novel two-step aggregation scheme is introduced. First, model regions are spatially aggregated to obtain a reduced region set; the aggregation is based on model parameters such as VRES time series and capacities, and the spatial contiguity of regions is also considered. Next, technological aggregation is performed on the VRESs in each region, based on their time series. The aggregations' impact on the accuracy and complexity of a cost-optimal European energy system model is analyzed: the model is aggregated to obtain different combinations of numbers of regions and VRES types, and the results are benchmarked against an initial resolution of 96 regions with 68 VRES types in each. The system cost deviates significantly when low numbers of regions and/or VRES types are considered. As the spatial and technological resolutions increase, the cost fluctuates initially and eventually stabilizes, approaching the benchmark. The optimal combination is determined based on an acceptable cost deviation of <5% and the point of stabilization; a total of 33 regions with 38 VRES types in each is deemed optimal. Here, the cost is underestimated by 4.42%, but the run time is reduced by 92.95%. Full article
18 pages, 2772 KiB  
Article
Curriculum Reinforcement Learning Based on K-Fold Cross Validation
by Zeyang Lin, Jun Lai, Xiliang Chen, Lei Cao and Jun Wang
Entropy 2022, 24(12), 1787; https://doi.org/10.3390/e24121787 - 6 Dec 2022
Cited by 14 | Viewed by 2673
Abstract
With the continuous development of deep reinforcement learning in intelligent control, combining automatic curriculum learning with deep reinforcement learning can improve the training performance and efficiency of algorithms by progressing from easy to difficult tasks. Most existing automatic curriculum learning algorithms perform curriculum ranking through expert experience and a single network, which leads to difficult curriculum task ranking and slow convergence. In this paper, we propose a curriculum reinforcement learning method based on K-Fold Cross Validation that can estimate the relative difficulty of curriculum tasks. Drawing on the human concept of learning from easy to difficult, this method divides automatic curriculum learning into a curriculum difficulty assessment stage and a curriculum sorting stage. Through parallel training of teacher models and cross-evaluation of task sample difficulty, the method can better sequence curriculum learning tasks. Finally, simulation comparison experiments were carried out in two types of multi-agent experimental environments. The experimental results show that the automatic curriculum learning method based on K-Fold cross-validation can improve the training speed of the MADDPG algorithm, and at the same time has a certain generality for multi-agent deep reinforcement learning algorithms based on the replay buffer mechanism. Full article
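The K-fold mechanics behind the difficulty assessment stage can be sketched generically: split the task pool into k disjoint folds, and score each task with evaluators trained on the other folds, so no task is scored by a model that saw it. The evaluator below is a hypothetical stand-in, not the paper's teacher model:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle range(n) and split it into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_scores(n_tasks, evaluate, k=5):
    """Score each task with evaluators trained on the other folds."""
    folds = kfold_indices(n_tasks, k)
    scores = np.zeros(n_tasks)
    for i, fold in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        for t in fold:
            scores[t] = evaluate(train, t)   # cross-evaluation of task difficulty
    return scores

# hypothetical difficulty oracle: difficulty grows with the task index
scores = cross_scores(20, lambda train, t: float(t), k=5)
order = np.argsort(scores)                   # easy-to-hard curriculum ordering
```

Sorting tasks by these held-out scores yields the easy-to-hard curriculum that the sorting stage then feeds to the student agent.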
14 pages, 1271 KiB  
Article
Applications of Virtual Machine Using Multi-Objective Optimization Scheduling Algorithm for Improving CPU Utilization and Energy Efficiency in Cloud Computing
by Rajkumar Choudhary and Suresh Perinpanayagam
Energies 2022, 15(23), 9164; https://doi.org/10.3390/en15239164 - 2 Dec 2022
Cited by 8 | Viewed by 1958
Abstract
Financial costs and energy savings are especially critical for computationally intensive workflows, as such workflows generally require extended execution times and thus consume substantial energy and entail high financial costs. Through the effective utilization of scheduling gaps, the total execution time of a workflow can be decreased by placing uncompleted tasks in the gaps through approximate computations. In the current research, a novel approach based on multi-objective optimization is utilized, with CloudSim as the underlying simulator, in order to evaluate VM (virtual machine) allocation performance. In this study, we determine the energy consumption, CPU utilization, and number of executed instructions in each scheduling interval for complex VM scheduling solutions, to improve energy efficiency and reduce execution time. Finally, all of the tested parameters are simulated and evaluated, with proper validation, in CloudSim. Based on the results, multi-objective PSO (particle swarm optimization) achieves better and more efficient results across the different parameters than multi-objective GA (genetic algorithm) optimization does. Full article
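The PSO component favored in this study can be sketched in its basic single-objective, global-best form: each particle's velocity is pulled toward its own best position and the swarm's best position. The cost function below is a toy stand-in for a VM-scheduling objective, not the paper's multi-objective CloudSim setup:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Global-best PSO: velocities are pulled toward each particle's best
    position (cognitive term) and the swarm's best position (social term)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# toy stand-in for a VM-scheduling cost surface (sphere function)
best, cost = pso_minimize(lambda p: float((p ** 2).sum()), dim=4)
```

The multi-objective variant used in the paper additionally maintains a Pareto archive and selects leaders from it instead of a single global best.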
22 pages, 2191 KiB  
Article
Improved Black Widow Spider Optimization Algorithm Integrating Multiple Strategies
by Chenxin Wan, Bitao He, Yuancheng Fan, Wei Tan, Tao Qin and Jing Yang
Entropy 2022, 24(11), 1640; https://doi.org/10.3390/e24111640 - 11 Nov 2022
Cited by 13 | Viewed by 1978
Abstract
The black widow spider optimization algorithm (BWOA) has the problems of slow convergence and a tendency to fall into local optima. To address these problems, this paper proposes a multi-strategy improved black widow spider optimization algorithm (IBWOA). First, Gauss chaotic mapping is introduced to initialize the population and ensure the diversity of the algorithm at the initial stage. Then, a sine cosine strategy is introduced to perturb individuals during iteration, improving the global search ability of the algorithm. In addition, an elite opposition-based learning strategy is introduced to improve the convergence speed of the algorithm. Finally, the mutation method of the differential evolution algorithm is integrated to reorganize the individuals with poor fitness values. Analysis of the optimization results on 13 benchmark test functions and a subset of the CEC2017 test functions verifies the effectiveness and rationality of each improved strategy, and shows that the proposed algorithm significantly improves solution accuracy, performance and convergence speed compared with other algorithms. Furthermore, the IBWOA is used to solve six practical constrained engineering problems. The results show that the IBWOA has excellent optimization ability and scalability. Full article
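The chaotic initialization strategy can be sketched on its own: iterate a chaotic map to produce a well-spread sequence, then rescale it into the search bounds. The sketch below uses one common form of the Gauss iterated map with hypothetical parameter values; the paper's exact map parameters are not specified here:

```python
import numpy as np

def gauss_chaotic_init(n, dim, lb, ub, alpha=4.9, beta=-0.58, x0=0.1):
    """Population initialization from the Gauss iterated map
    x_{k+1} = exp(-alpha * x_k**2) + beta, rescaled into [lb, ub]."""
    seq = np.empty(n * dim)
    x = x0
    for i in range(n * dim):
        x = np.exp(-alpha * x * x) + beta
        seq[i] = x
    unit = (seq - seq.min()) / (seq.max() - seq.min())   # rescale to [0, 1]
    return lb + unit.reshape(n, dim) * (ub - lb)

pop = gauss_chaotic_init(n=20, dim=5, lb=-10.0, ub=10.0)
```

Compared with uniform random initialization, a chaotic sequence is deterministic and tends to avoid the clustering that can leave parts of the search space unexplored at the start of a run.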
12 pages, 3383 KiB  
Article
An HGA-LSTM-Based Intelligent Model for Ore Pulp Density in the Hydrometallurgical Process
by Guobin Zou, Junwu Zhou, Kang Li and Hongliang Zhao
Materials 2022, 15(21), 7586; https://doi.org/10.3390/ma15217586 - 28 Oct 2022
Cited by 2 | Viewed by 1441
Abstract
This study focused on an intelligent model for ore pulp density in the hydrometallurgical process. Owing to the limitations of existing instruments and devices, the feed ore pulp density of the thickener, a key piece of hydrometallurgical equipment, cannot be accurately measured online. Therefore, to accurately estimate the feed ore pulp density, we propose a new intelligent model based on long short-term memory (LSTM) and a hybrid genetic algorithm (HGA). Specifically, the HGA is a novel optimization search algorithm that can tune the hyperparameters and improve the modeling performance of the LSTM. The proposed intelligent model was successfully applied to an actual thickener case in China. The prediction results demonstrate that the hybrid model outperforms other models and well satisfies the measurement accuracy requirements of the factory. Full article

17 pages, 7018 KiB  
Article
Research on Joint Resource Allocation for Multibeam Satellite Based on Metaheuristic Algorithms
by Wei Gao, Lei Wang and Lianzheng Qu
Entropy 2022, 24(11), 1536; https://doi.org/10.3390/e24111536 - 26 Oct 2022
Viewed by 1915
Abstract
With the rapid growth of satellite communication demand and the continuous development of high-throughput satellite systems, the satellite resource allocation problem, also called the dynamic resources management (DRM) problem, has become increasingly complex in recent years. The use of metaheuristic algorithms to obtain acceptable near-optimal solutions has become a hot research topic with the potential to be explored further. In particular, the treatment of invalid solutions is key to algorithm performance. At present, the unused bandwidth allocation (UBA) method is commonly used to address the bandwidth constraint in the DRM problem. However, this method reduces the algorithm's flexibility in the solution space, diminishes the quality of the optimized solution, and increases the computational complexity. In this paper, we propose a bandwidth constraint handling approach based on the non-dominated beam coding (NDBC) method, which eliminates the bandwidth overlap constraint from the algorithm's population evolution and achieves complete bandwidth flexibility, increasing the quality of the optimal solution while decreasing the computational complexity. We develop a generic application architecture for metaheuristic algorithms using the NDBC method and successfully apply it to four typical algorithms. The results indicate that NDBC can enhance the quality of the optimized solution by 9–33% while simultaneously reducing computational complexity by 9–21%. Full article
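The abstract does not give the NDBC encoding itself; the sketch below is my own assumption of how such a coding could look, and only illustrates the general principle of making infeasible bandwidth assignments unrepresentable: raw genes are decoded into contiguous sub-bands normalized to the carrier bandwidth, so every individual satisfies the overlap constraint by construction.

```python
import numpy as np

def decode_beam_bandwidth(genes, total_bw):
    """Decode raw genes in (0, 1] into contiguous, non-overlapping sub-bands.
    Widths are normalized to the carrier bandwidth, so every decoded
    individual is feasible by construction and no invalid solution can
    enter the population."""
    widths = np.asarray(genes, dtype=float)
    widths = widths / widths.sum() * total_bw       # share of the carrier
    starts = np.concatenate(([0.0], np.cumsum(widths)[:-1]))
    return list(zip(starts, starts + widths))       # (f_start, f_end) per beam

bands = decode_beam_bandwidth([0.2, 0.5, 0.3, 0.1], total_bw=500.0)
```

Because the decoder never produces overlapping bands, the metaheuristic can evolve genes freely without any repair or penalty step.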

19 pages, 3927 KiB  
Article
Model NOx, SO2 Emissions Concentration and Thermal Efficiency of CFBB Based on a Hyper-Parameter Self-Optimized Broad Learning System
by Yunpeng Ma, Chenheng Xu, Hua Wang, Ran Wang, Shilin Liu and Xiaoying Gu
Energies 2022, 15(20), 7700; https://doi.org/10.3390/en15207700 - 18 Oct 2022
Cited by 4 | Viewed by 1708
Abstract
At present, establishing a multidimensional characteristic model of a boiler combustion system plays an important role in realizing its dynamic optimization and real-time control, so as to reduce environmental pollution and save coal resources. However, the complexity of the boiler combustion process makes it difficult to model with traditional mathematical methods. In this paper, a broad learning system whose hyper-parameters are self-optimized by a sparrow search algorithm is proposed to model the NOx and SO2 emission concentrations and the thermal efficiency of a circulating fluidized bed boiler (CFBB). The broad learning system (BLS) is a novel neural network algorithm that shows good performance in multidimensional feature learning. However, the BLS has several hyper-parameters that must be set over wide ranges, so the optimal combination of hyper-parameters is difficult to determine. This paper uses a sparrow search algorithm (SSA) to select the optimal hyper-parameter combination of the broad learning system, termed SSA-BLS. To verify the effectiveness of SSA-BLS, ten benchmark regression datasets are applied. Experimental results show that SSA-BLS achieves good regression accuracy and model stability. Additionally, the proposed SSA-BLS is applied to model the combustion process parameters of a 330 MW circulating fluidized bed boiler. Experimental results reveal that SSA-BLS can establish accurate prediction models for thermal efficiency, NOx emission concentration, and SO2 emission concentration separately. Altogether, SSA-BLS is an effective modelling method. Full article
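For readers unfamiliar with the BLS, a minimal sketch is given below: random linear feature nodes, tanh enhancement nodes, and a ridge-regularized least-squares output layer. The node counts and regularization constant stand in for the kind of hyper-parameters an outer optimizer such as the SSA would tune; this is a generic illustration, not the paper's implementation.

```python
import numpy as np

def bls_fit_predict(X, y, Xq, n_feat=20, n_enh=40, reg=1e-3, seed=0):
    """Minimal broad learning system: random feature nodes, tanh
    enhancement nodes, and ridge-regularized output weights."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feat))
    We = rng.standard_normal((n_feat, n_enh))
    Z = X @ Wf                                   # feature nodes
    A = np.hstack([Z, np.tanh(Z @ We)])          # + enhancement nodes
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    Zq = Xq @ Wf
    return np.hstack([Zq, np.tanh(Zq @ We)]) @ W

rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(200)
pred = bls_fit_predict(X, y, X)
```

An outer search loop (SSA, or any other metaheuristic) would evaluate candidate (n_feat, n_enh, reg) triples by cross-validated error of this fit.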

13 pages, 2377 KiB  
Article
A Pattern-Recognizer Artificial Neural Network for the Prediction of New Crescent Visibility in Iraq
by Ziyad T. Allawi
Computation 2022, 10(10), 186; https://doi.org/10.3390/computation10100186 - 13 Oct 2022
Cited by 6 | Viewed by 2382
Abstract
Various theories have been proposed since the last century to predict the first sighting of a new crescent moon, but none of them uses machine learning or deep learning to process, interpret, and simulate the patterns hidden in databases. Many of these theories use interpolation and extrapolation techniques to identify sighting regions through such data. In this study, a pattern-recognizer artificial neural network was trained to distinguish between visibility regions. Essential parameters of crescent moon sighting were collected from moon-sighting datasets and used to build an intelligent pattern-recognition system that predicts the crescent sighting conditions. The proposed ANN learned the datasets with an accuracy of more than 72% in comparison with the actual observational results. The ANN simulation gives clear insight into three crescent moon visibility regions: invisible (I), probably visible (P), and certainly visible (V). The proposed ANN is suitable for building lunar calendars, so it was used to build a four-year calendar for the horizon of Baghdad, which was then compared with the official Hijri calendar of Iraq. Full article

22 pages, 4366 KiB  
Article
Shear Strength Prediction Model for RC Exterior Joints Using Gene Expression Programming
by Moiz Tariq, Azam Khan and Asad Ullah
Materials 2022, 15(20), 7076; https://doi.org/10.3390/ma15207076 - 12 Oct 2022
Cited by 3 | Viewed by 1906
Abstract
Predictive models were developed to effectively estimate the shear strength of RC exterior joints using gene expression programming (GEP). Two separate models are proposed for the exterior joints: the first with shear reinforcement and the second without. Experimental results of the relevant input parameters from 253 tests were extracted from the literature to provide the knowledge base for the GEP analysis. The database was further divided into two portions: 152 exterior joint experiments with joint transverse reinforcement and 101 unreinforced joint specimens. Moreover, the effects of different material and geometric factors (usually ignored in the available models) were incorporated into the proposed models. These factors are beam and column geometries, concrete and steel material properties, longitudinal and shear reinforcements, and column axial loads. Statistical analysis and comparisons with previously proposed analytical and empirical models indicate a high degree of accuracy of the proposed models, rendering them ideal for practical application. Full article

11 pages, 2520 KiB  
Article
Analysis of Vulnerability on Weighted Power Networks under Line Breakdowns
by Lixin Yang, Ziyu Gu, Yuanchen Dang and Peiyan He
Entropy 2022, 24(10), 1449; https://doi.org/10.3390/e24101449 - 11 Oct 2022
Cited by 5 | Viewed by 1567
Abstract
Vulnerability is a major concern for power networks. Malicious attacks have the potential to trigger cascading failures and large blackouts. The robustness of power networks against line failure has attracted considerable interest in the past several years, but most studies do not cover the weighted networks found in the real world. This paper investigates the vulnerability of weighted power networks. Firstly, we propose a more practical capacity model to investigate the cascading failure of weighted power networks under different attack strategies. Results show that a smaller threshold of the capacity parameter increases the vulnerability of weighted power networks. Furthermore, a weighted electrical cyber-physical interdependent network is developed to study the vulnerability and failure dynamics of the entire power network. We perform simulations on the IEEE 118-bus case to evaluate the vulnerability under various coupling schemes and different attack strategies. Simulation results show that heavier loads increase the likelihood of blackouts and that different coupling strategies play a crucial role in the cascading failure performance. Full article
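A toy capacity-threshold cascade (my own simplification, not the paper's model) shows the mechanism behind the reported result: with a small capacity margin, removing a single line can overload and topple the whole system, whereas a larger margin absorbs the redistributed load.

```python
def cascade(loads, alpha, attacked):
    """Toy cascading-failure model: each line's capacity is (1 + alpha)
    times its initial load; the load of every failed line is redistributed
    evenly over the surviving lines until no further line is overloaded."""
    capacity = {i: (1 + alpha) * l for i, l in loads.items()}
    load = dict(loads)
    failed = {attacked}
    frontier = [attacked]
    while frontier:
        shed = sum(load[i] for i in frontier)
        alive = [i for i in load if i not in failed]
        if not alive:
            break
        for i in alive:
            load[i] += shed / len(alive)          # even redistribution
        frontier = [i for i in alive if load[i] > capacity[i]]
        failed.update(frontier)
    return failed

lines = {0: 10.0, 1: 10.0, 2: 10.0, 3: 10.0}
few = cascade(lines, alpha=0.5, attacked=0)   # ample headroom: attack contained
many = cascade(lines, alpha=0.1, attacked=0)  # tight threshold: full collapse
```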

18 pages, 1788 KiB  
Article
An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning
by Zhiyu Chen, Jianyu Ding, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu and Yimu Ji
Entropy 2022, 24(10), 1377; https://doi.org/10.3390/e24101377 - 27 Sep 2022
Cited by 3 | Viewed by 1836
Abstract
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic owing to the inherently hidden nature of deep neural networks, and such attacks have become a critical focus in the current security field. However, current black-box attack methods still have shortcomings that result in incomplete utilization of query information. Our research, based on the recently proposed Simulator Attack, proves for the first time the correctness and usability of the feature layer information in a simulator model obtained by meta-learning. We then propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed, improving query efficiency while maintaining attack performance. Full article

15 pages, 2387 KiB  
Article
Dynamic Programming BN Structure Learning Algorithm Integrating Double Constraints under Small Sample Condition
by Zhigang Lv, Yiwei Chen, Ruohai Di, Hongxi Wang, Xiaojing Sun, Chuchao He and Xiaoyan Li
Entropy 2022, 24(10), 1354; https://doi.org/10.3390/e24101354 - 24 Sep 2022
Viewed by 1523
Abstract
The Bayesian network (BN) structure learning algorithm based on dynamic programming can obtain globally optimal solutions. However, when the sample cannot fully capture the information of the real structure, especially when the sample size is small, the obtained structure is inaccurate. This paper therefore examines the planning process and properties of dynamic programming, restricts the process with edge and path constraints, and proposes a dynamic programming BN structure learning algorithm with double constraints for small-sample conditions. The algorithm uses the double constraints to limit the planning process of dynamic programming and reduce the planning space, and then uses them to limit the selection of the optimal parent node, ensuring that the optimal structure conforms to prior knowledge. Finally, the methods with and without integrated prior knowledge are simulated and compared. The simulation results verify the effectiveness of the proposed method and prove that integrating prior knowledge can significantly improve the efficiency and accuracy of BN structure learning. Full article

26 pages, 15176 KiB  
Review
Optimization-Based High-Frequency Circuit Miniaturization through Implicit and Explicit Constraint Handling: Recent Advances
by Anna Pietrenko-Dabrowska, Slawomir Koziel and Marzieh Mahrokh
Energies 2022, 15(19), 6955; https://doi.org/10.3390/en15196955 - 22 Sep 2022
Cited by 4 | Viewed by 1457
Abstract
Miniaturization trends in high-frequency electronics have led to accommodation challenges in the integration of the corresponding components. Size reduction thereof has become a practical necessity. At the same time, the increasing performance demands imposed on electronic systems remain in conflict with component miniaturization. On the practical side, the challenges related to handling design constraints are aggravated by the high cost of system evaluation, normally requiring full-wave electromagnetic (EM) analysis. Some of these issues can be alleviated by implicit constraint handling using the penalty function approach. Yet, its performance depends on the arrangement of the penalty factors, necessitating a costly trial-and-error procedure to identify their optimum setup. A workaround is offered by the recently proposed algorithms with automatic adaptation of the penalty factors using different adjustment schemes. However, these intricate strategies require a continuous problem-dependent adaptation of the penalty function throughout the entire optimization process. Alternative methodologies have been proposed by taking an explicit approach to handle the inequality constraints, along with correction-based control over equality conditions, the combination of which proves to be demonstrably competitive for some miniaturization tasks. Nevertheless, optimization-based miniaturization, whether using implicit or explicit constraint handling, remains a computationally expensive task. A reliable way of reducing the aforementioned costs is the incorporation of multi-resolution EM fidelity models into the miniaturization procedure. Therein, the principal operation is based on the simultaneous monitoring of factors such as quality of the constraint satisfaction, as well as algorithm convergence status. 
This paper provides an overview of the abovementioned size-reduction algorithms, in which theoretical considerations are illustrated using a number of antenna and microwave circuit case studies. Full article

11 pages, 1459 KiB  
Article
Sensor Fusion for Occupancy Estimation: A Study Using Multiple Lecture Rooms in a Complex Building
by Cédric Roussel, Klaus Böhm and Pascal Neis
Mach. Learn. Knowl. Extr. 2022, 4(3), 803-813; https://doi.org/10.3390/make4030039 - 16 Sep 2022
Cited by 3 | Viewed by 2518
Abstract
This paper applies various machine learning methods that explore the combination of multiple sensors to improve the quality of room occupancy estimation. A reliable occupancy estimate can help in many different cases and applications; for the containment of the SARS-CoV-2 virus, in particular, room occupancy is a major factor. The estimation can benefit visitor management systems in real time, but can also inform predictive room reservation strategies. Using different terminal and non-terminal sensors in rooms of varying sizes, this paper aims to estimate room occupancy. In the process, the proposed models are trained with different combinations of rooms in the training and testing datasets to examine distinctions in the infrastructure of the considered building. The results indicate that the estimation benefits from a combination of different sensors. Additionally, it is found that a model should be trained with data from every room in a building, as it cannot be transferred to other rooms. Full article

22 pages, 4992 KiB  
Article
A Period-Based Neural Network Algorithm for Predicting Building Energy Consumption of District Heating
by Zhengchao Xie, Xiao Wang, Lijun Zheng, Hao Chang and Fei Wang
Energies 2022, 15(17), 6338; https://doi.org/10.3390/en15176338 - 30 Aug 2022
Viewed by 1571
Abstract
Northern China is vigorously promoting cogeneration and clean heating technologies. The accurate prediction of building energy consumption is the basis for heating regulation. In this paper, the daily, weekly, and annual periods of building energy consumption are determined by Fourier transformation. Accordingly, a period-based neural network (PBNN) is proposed to predict building energy consumption. The main innovation of PBNN is the introduction of a new data structure, which is a time-discontinuous sliding window. The sliding window consists of the past 24 h, 24 h for the same period last week, and 24 h for the same period the previous year. When predicting the building energy consumption for the next 1 h, 12 h, and 24 h, the prediction errors of the PBNN are 2.30%, 3.47%, and 3.66% lower than those of the traditional sliding window PBNN (TSW-PBNN), respectively. The training time of PBNN is approximately half that of TSW-PBNN. The time-discontinuous sliding window reduces the energy consumption prediction error and neural network model training time. Full article
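The time-discontinuous sliding window described above can be sketched as an index selector over an hourly series; the 168-hour week and 8760-hour year are my reading of "same period last week / the previous year", not figures from the paper.

```python
def discontinuous_window(t, hours_per_week=168, hours_per_year=8760, w=24):
    """Indices for a PBNN-style time-discontinuous sliding window: the past
    24 h, the same 24 h one week earlier, and the same 24 h one year earlier."""
    past_day  = list(range(t - w, t))
    last_week = list(range(t - hours_per_week - w, t - hours_per_week))
    last_year = list(range(t - hours_per_year - w, t - hours_per_year))
    return last_year + last_week + past_day        # 72 inputs in total

idx = discontinuous_window(t=10000)
```

The payoff of this layout is that 72 inputs cover daily, weekly, and annual periodicity, instead of a continuous window thousands of hours long.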

18 pages, 1633 KiB  
Article
Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization
by Shengxiang Hu, Bofeng Zhang, Hehe Lv, Furong Chang, Chenyang Zhou, Liangrui Wu and Guobing Zou
Entropy 2022, 24(9), 1213; https://doi.org/10.3390/e24091213 - 30 Aug 2022
Cited by 1 | Viewed by 1724
Abstract
Network data analysis is a crucial method for mining complicated object interactions. In recent years, random walk and neural-language-model-based network representation learning (NRL) approaches have been widely used for network data analysis. However, these NRL approaches suffer from the following deficiencies: firstly, because the random walk procedure is based on symmetric node similarity and fixed probability distribution, the sampled vertices’ sequences may lose local community structure information; secondly, because the feature extraction capacity of the shallow neural language model is limited, they can only extract the local structural features of networks; and thirdly, these approaches require specially designed mechanisms for different downstream tasks to integrate vertex attributes of various types. We conducted an in-depth investigation to address the aforementioned issues and propose a novel general NRL framework called dynamic structure and vertex attribute fusion network embedding, which firstly defines an asymmetric similarity and h-hop dynamic random walk strategy to guide the random walk process to preserve the network’s local community structure in walked vertex sequences. Next, we train a self-attention-based sequence prediction model on the walked vertex sequences to simultaneously learn the vertices’ local and global structural features. Finally, we introduce an attributes-driven Laplacian space optimization to converge the process of structural feature extraction and attribute feature extraction. The proposed approach is exhaustively evaluated by means of node visualization and classification on multiple benchmark datasets, and achieves superior results compared to baseline approaches. Full article

13 pages, 3450 KiB  
Article
Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition
by Huawei Tao, Lei Geng, Shuai Shan, Jingchao Mai and Hongliang Fu
Entropy 2022, 24(8), 1025; https://doi.org/10.3390/e24081025 - 26 Jul 2022
Cited by 13 | Viewed by 2133
Abstract
The quality of feature extraction plays a significant role in the performance of speech emotion recognition. To extract discriminative, affect-salient features from speech signals and thereby improve the performance of speech emotion recognition, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). Firstly, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added behind each pooling layer to retain features of different resolutions, which are then fused by addition. Secondly, the MSFCN and a Bi-LSTM network are combined into a hybrid network that extracts speech emotion features while supplying the temporal structure information of the emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to achieve the best fusion features. The proposed method uses the attention mechanism to calculate the contribution of each network's features and then realizes their adaptive fusion by weighting them accordingly. To restrain gradient divergence in the network, the individual network features and the fusion features are connected through shortcut connections to obtain the final features for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, with a recognition rate superior to most existing state-of-the-art methods. Full article
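The weighting idea can be reduced to a single-head sketch: score each stream's feature vector against a query vector, softmax the scores into contribution weights, and add a shortcut term. The paper's MSCRNN-A uses a multi-head mechanism inside a trained network; everything below is an illustrative simplification.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w_query):
    """Single-head attention-style fusion of several stream features,
    with a shortcut (residual) term added to the weighted sum."""
    F = np.stack(features)                        # (n_streams, dim)
    scores = F @ w_query / np.sqrt(F.shape[1])    # scaled dot-product scores
    weights = softmax(scores)                     # per-stream contributions
    fused = weights @ F                           # adaptive weighted fusion
    return fused + F.mean(axis=0), weights        # shortcut connection

rng = np.random.default_rng(0)
streams = [rng.standard_normal(8) for _ in range(3)]
fused, w = attention_fuse(streams, w_query=rng.standard_normal(8))
```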

21 pages, 6618 KiB  
Article
Optimal Performance and Application for Seagull Optimization Algorithm Using a Hybrid Strategy
by Qingyu Xia, Yuanming Ding, Ran Zhang, Huiting Zhang, Sen Li and Xingda Li
Entropy 2022, 24(7), 973; https://doi.org/10.3390/e24070973 - 14 Jul 2022
Cited by 5 | Viewed by 1925
Abstract
This paper presents a novel hybrid algorithm, SPSOA, to address the low search capability of the seagull optimization algorithm and its tendency to fall into local optima. Firstly, the Sobol low-discrepancy sequence is used to initialize the seagull population, enhancing the population's diversity and ergodicity. Then, inspired by the sigmoid function, a new parameter is designed to strengthen the algorithm's ability to balance early exploration and late development. Finally, a particle swarm optimization learning strategy is introduced into the seagull position update to improve the algorithm's ability to escape local optima. Simulation comparisons with other algorithms on 12 benchmark test functions from different angles show that SPSOA is superior in stability, convergence accuracy, and speed. In engineering applications, SPSOA is applied to the blind source separation of mixed images. The experimental results show that SPSOA can successfully realize the blind source separation of noisy mixed images and achieves higher separation performance than the compared algorithms. Full article
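Two of the ingredients can be sketched directly: a sigmoid-shaped schedule for the exploration/exploitation parameter, and a PSO-style learning step grafted onto the position update. The schedule constants and PSO coefficients below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid_control(t, t_max, a_start=2.0, a_end=0.0, steep=10.0):
    """Sigmoid-shaped schedule: close to a_start early (global
    exploration), dropping smoothly toward a_end late (local
    development)."""
    s = 1.0 / (1.0 + np.exp(steep * (t / t_max - 0.5)))
    return a_end + (a_start - a_end) * s

def pso_style_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """PSO learning step applied to a position, pulling it toward its
    personal best and the global best to help escape local optima."""
    rng = rng if rng is not None else np.random.default_rng(0)
    v = (w * v + c1 * rng.random(x.shape) * (pbest - x)
               + c2 * rng.random(x.shape) * (gbest - x))
    return x + v, v

early = sigmoid_control(0, 100)    # near a_start: strong exploration
late = sigmoid_control(100, 100)   # near a_end: fine exploitation
x2, v2 = pso_style_update(np.zeros(3), np.zeros(3), np.ones(3), np.ones(3))
```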

13 pages, 1460 KiB  
Review
Emergent Intelligence in Generalized Pure Quantum Systems
by Miroslav Svítek
Cited by 3 | Viewed by 2308
Abstract
This paper presents a generalized information system theory extended to pure quantum systems using wave probability functions. The novelty of this approach is based on analogies with electrical circuits and quantum physics. Information power was chosen as the relevant parameter, which guarantees the balance of both components: information flow and information content. Next, the principles of quantum resonance between individual information components, which can lead to emergent behavior, are analyzed. For such a system, adding more and more probabilistic information elements can lead to better convergence of the whole to the resulting trajectory due to the phase parameters. The paper also offers an original interpretation of information "source–recipient" or "resource–demand" models, including the not-yet-implemented "unused resources" and "unmet demands". Finally, possible applications of these principles are shown in several examples, from the quantum gyrator to the hypothetical possibility of explaining some properties of consciousness. Full article

14 pages, 3128 KiB  
Article
Bullet Frangibility Factor Quantification by Using Explicit Dynamic Simulation Method
by Widyastuti Widyastuti, Holly Indi Ramadhian, Mas Irfan Purbawanto Hidayat, Adhy Prihatmiko Wibowo and Hosta Ardhyananta
Viewed by 2772
Abstract
Frangible bullets have a unique property: they disintegrate into fragments upon hitting a hard target or obstacle. This ability to fragment after impact is called frangibility. In this study, frangibility was tested theoretically via modeling with the explicit dynamics method, using the ANSYS Autodyn solver integrated in the ANSYS Workbench software. This paper analyzes frangibility through two main factors: material properties and projectile design. The results show the scattering and the remaining bullet fragments after impact. According to the modeling results, the frangibility factor values for the AMMO 1 and AMMO 2 designs are 9.34 and 10.79, respectively, and the errors in the frangibility factor between the experimental results and the simulations are 10.5% and 1.09%, respectively. The simulation results also show that the AMMO 2 design scatters more particles than the AMMO 1 design, with the farthest scattering distances for the AMMO 1 and AMMO 2 bullets being 1.01 m and 2658 m, respectively. Full article

24 pages, 6955 KiB  
Article
Improved Shear Strength Prediction Model of Steel Fiber Reinforced Concrete Beams by Adopting Gene Expression Programming
by Moiz Tariq, Azam Khan, Asad Ullah, Javad Shayanfar and Momina Niaz
Materials 2022, 15(11), 3758; https://doi.org/10.3390/ma15113758 - 24 May 2022
Cited by 14 | Viewed by 2663
Abstract
In this study, an artificial intelligence tool called gene expression programming (GEP) has been successfully applied to develop an empirical model that can predict the shear strength of steel fiber reinforced concrete beams. The proposed genetic model incorporates all the influencing parameters, such as the geometric properties of the beam, the concrete compressive strength, the shear span-to-depth ratio, and the mechanical and material properties of the steel fiber. Existing empirical models ignore the tensile strength of the steel fibers, which exerts a strong influence on crack propagation in the concrete matrix and thereby affects the beam shear strength. To overcome this limitation, an improved and robust empirical model is proposed herein that incorporates the fiber tensile strength along with the other influencing factors. For this purpose, an extensive experimental database of beams subjected to four-point loading was constructed, comprising the results of 488 tests drawn from the literature. The data are divided based on the shape (hooked or straight) and the tensile strength of the steel fiber. The empirical model is developed using this experimental database and statistically compared with previously established empirical equations. This comparison indicates that the proposed model shows significant improvement in predicting the shear strength of steel fiber reinforced concrete beams, thus substantiating the important role of fiber tensile strength. Full article

19 pages, 3550 KiB  
Article
A Tailored Pricing Strategy for Different Types of Users in Hybrid Carsharing Systems
by Rongqin Lu, Xiaomei Zhao and Yingqi Wang
Algorithms 2022, 15(5), 172; https://doi.org/10.3390/a15050172 - 20 May 2022
Cited by 3 | Viewed by 2353
Abstract
Considering the characteristics of different types of users in hybrid carsharing systems, in which sharing autonomous vehicles (SAVs) and conventional sharing cars (CSCs) coexist, a tailored pricing strategy (TPS) is proposed to maximize the operator’s profit and minimize all users’ costs. The fleet sizes and the sizes of the SAVs’ stations are determined simultaneously. A bi-objective nonlinear programming model is established, and a genetic algorithm is applied to solve it. Based on operational data from Lanzhou, China, carsharing users are clustered into three types: loyal users, losing users, and potential users. Results show that applying the TPS can help the operator increase profit and attract more users. The loyal users are assigned the highest price, yet they still contribute the most to the operator’s profit, making the highest number of carsharing trips. The losing and potential users are comparable in their number of trips, while the latter generate more profit. Full article
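The user-segmentation step described above can be sketched as follows. The abstract does not name the clustering algorithm, so k-means is used here as a common stand-in, on synthetic, hypothetical usage features (trips per month, days since last trip) rather than the Lanzhou operational data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical usage features per user: [monthly trips, days since last trip].
# The three synthetic groups mimic loyal, losing, and potential users.
rng = np.random.default_rng(0)
loyal = rng.normal([20.0, 5.0], [2.0, 1.0], size=(50, 2))
losing = rng.normal([5.0, 60.0], [1.0, 5.0], size=(50, 2))
potential = rng.normal([2.0, 10.0], [0.5, 2.0], size=(50, 2))
X = np.vstack([loyal, losing, potential])

# Cluster into three user types, as in the paper.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

With well-separated groups like these, each synthetic segment maps cleanly onto one cluster; a real pipeline would first engineer features from trip logs.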

15 pages, 1672 KiB  
Article
Predicting Box-Office Markets with Machine Learning Methods
by Dawei Li and Zhi-Ping Liu
Entropy 2022, 24(5), 711; https://doi.org/10.3390/e24050711 - 16 May 2022
Cited by 8 | Viewed by 3612
Abstract
The accurate prediction of gross box-office markets is of great benefit for investment and management in the movie industry. In this work, we propose a machine learning-based method for predicting a country’s movie box-office revenue, based on empirical comparisons of eight methods with diverse combinations of economic factors. In time-series forecasting experiments from 2013 to 2016, we achieved a relative root mean squared error of 0.056 in the US and 0.183 in China for the two case-study movie markets. We concluded that the support-vector-machine-based method using gross domestic product reached the best prediction performance while relying only on easily available economic factors. The computational experiments and comparison studies provided evidence for the effectiveness and advantages of our proposed prediction strategy. In the validation of the predicted total box-office markets in 2017, the error rates were 0.044 in the US and 0.066 in China. In the consecutive predictions of nationwide box-office markets in 2018 and 2019, the mean relative absolute percentage errors achieved were 0.041 and 0.035 in the US and China, respectively. The precise predictions, on both the training and validation data, demonstrate the efficiency and versatility of our proposed method. Full article
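The best-performing configuration reported above, a support vector machine regressor fed gross domestic product and scored by relative RMSE, can be sketched on synthetic data (not the paper's data; the linear GDP-to-revenue relation below is an assumption for illustration only):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic illustration: box-office revenue assumed roughly
# proportional to GDP plus noise (hypothetical values).
rng = np.random.default_rng(1)
gdp = np.linspace(10.0, 20.0, 60).reshape(-1, 1)
box = 0.5 * gdp.ravel() + rng.normal(0.0, 0.2, 60)

X_tr, X_te, y_tr, y_te = train_test_split(gdp, box, test_size=0.25, random_state=1)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Relative RMSE, the error measure quoted in the abstract.
rrmse = np.sqrt(np.mean((pred - y_te) ** 2)) / np.mean(y_te)
```

On real data the split would be chronological (train on 2013 onward, validate on later years), as in the paper's time-series experiments.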

17 pages, 2467 KiB  
Article
PSO Optimized Active Disturbance Rejection Control for Aircraft Anti-Skid Braking System
by Fengrui Xu, Mengqiao Chen, Xuelin Liang and Wensheng Liu
Algorithms 2022, 15(5), 158; https://doi.org/10.3390/a15050158 - 10 May 2022
Cited by 4 | Viewed by 2354
Abstract
A high-quality and secure touchdown run for an aircraft is essential for economic, operational, and strategic reasons. The shortest viable touchdown run without any skidding requires variable braking pressure to manage the friction between the runway surface and the braking tire at all times. Therefore, the control of the anti-skid braking system (ABS) must handle strong nonlinearity and unmeasurable disturbances while regulating the wheel slip ratio so that the braking system operates safely. This work proposes an active disturbance rejection control technique for the anti-skid braking system. The control law ensures bounded, manageable control action, and the algorithm keeps the closed-loop system operating around the peak of the stable region of the friction curve, thereby improving overall braking performance and safety. The stability of the proposed algorithm is proven by means of Lyapunov-based methods, and its effectiveness is assessed through simulations on a semi-physical aircraft brake simulation platform. Full article
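At the heart of active disturbance rejection control is an extended state observer (ESO) that estimates the unmeasured total disturbance online so the control law can cancel it. A minimal sketch of a linear ESO on a toy first-order plant, not the aircraft brake model or gains from the paper:

```python
# Linear ESO for the toy plant y' = d(t) + b0*u, with unknown disturbance d.
# z1 estimates y; z2 estimates the total disturbance d.
dt, omega, b0 = 1e-3, 50.0, 1.0
beta1, beta2 = 2.0 * omega, omega ** 2   # gains placing both observer poles at -omega

d, u = 1.0, 0.0                          # constant unknown disturbance, zero input
y, z1, z2 = 0.0, 0.0, 0.0
for _ in range(2000):                    # 2 s of forward-Euler simulation
    y += (d + b0 * u) * dt               # plant
    e = y - z1                           # output estimation error
    z1 += (z2 + b0 * u + beta1 * e) * dt # state estimate tracks y
    z2 += beta2 * e * dt                 # disturbance estimate converges to d
```

Once `z2` tracks the disturbance, an ADRC law of the form u = (u0 − z2)/b0 rejects it; the paper additionally tunes such parameters with PSO and proves stability via Lyapunov methods.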

23 pages, 5268 KiB  
Article
Evaluation of Various Tree-Based Ensemble Models for Estimating Solar Energy Resource Potential in Different Climatic Zones of China
by Zhigao Zhou, Aiwen Lin, Lijie He and Lunche Wang
Energies 2022, 15(9), 3463; https://doi.org/10.3390/en15093463 - 9 May 2022
Cited by 6 | Viewed by 2085
Abstract
Solar photovoltaic (PV) electricity generation is growing rapidly in China. Accurate estimation of solar energy resource potential (Rs) is crucial for siting, designing, evaluating and optimizing PV systems. Seven types of tree-based ensemble models, including classification and regression trees (CART), extremely randomized trees (ET), random forest (RF), gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), gradient boosting with categorical features support (CatBoost) and light gradient boosting method (LightGBM), as well as the multi-layer perceptron (MLP) and support vector machine (SVM), were applied to estimate Rs using a k-fold cross-validation method. The three newly developed models (CatBoost, LightGBM, XGBoost) and the GBDT model generally outperformed the other five models with satisfactory accuracy (R2 ranging from 0.893–0.916, RMSE from 1.943–2.195 MJm−2d−1, and MAE from 1.457–1.646 MJm−2d−1 on average) and acceptable model stability (the increase in testing RMSE over training RMSE ranged from 8.3% to 31.9%) under seven input combinations. In addition, CatBoost (12.3 s), LightGBM (13.9 s), XGBoost (20.5 s) and GBDT (16.8 s) exhibited satisfactory computational efficiency compared with the MLP (132.1 s) and SVM (256.8 s). Comprehensively considering model accuracy, stability and computational time, the newly developed tree-based models (CatBoost, LightGBM, XGBoost) and the commonly used GBDT model are recommended for modeling Rs in the contrasting climates of China and possibly in similar climatic zones elsewhere in the world. This study evaluated three newly developed tree-based ensemble models for estimating Rs in various climates of China in terms of model accuracy, model stability and computational efficiency, providing a new look at indicators for evaluating machine learning methods. Full article
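The k-fold comparison of tree-based ensembles can be sketched with scikit-learn's built-in learners (GBDT and RF here; CatBoost, LightGBM and XGBoost need external packages) on synthetic stand-in data, not the paper's meteorological records:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 7 meteorological inputs (e.g. sunshine duration,
# temperature range, humidity); y plays the role of Rs.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 7))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] - X[:, 2] + rng.normal(0.0, 0.3, 300)

# 5-fold cross-validated R^2 for each candidate model.
results = {}
for name, model in [("GBDT", GradientBoostingRegressor(random_state=0)),
                    ("RF", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    results[name] = scores.mean()
```

The paper ranks models on cross-validated accuracy, the training-to-testing RMSE gap (stability), and wall-clock time; the same loop extends to those metrics.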

15 pages, 1974 KiB  
Article
Investigating Multi-Level Semantic Extraction with Squash Capsules for Short Text Classification
by Jing Li, Dezheng Zhang and Aziguli Wulamu
Entropy 2022, 24(5), 590; https://doi.org/10.3390/e24050590 - 23 Apr 2022
Cited by 3 | Viewed by 2561
Abstract
At present, short text classification is a hot topic in the area of natural language processing. Due to the sparseness and irregularity of short text, the task of short text classification still faces great challenges. In this paper, we propose a new classification model built around short text representation, global feature extraction and local feature extraction. We use convolutional networks to extract shallow features from short text vectorization and introduce a multi-level semantic extraction framework, which uses BiLSTM as the encoding layer, with an attention mechanism and normalization as the interaction layer. Finally, we concatenate the convolution feature vector and the semantic results of the semantic framework. After several rounds of feature integration, the framework improves the quality of the feature representation. Combined with the capsule network, we obtain high-level local information by dynamic routing and then squash it. In addition, we explore the optimal depth of semantic feature extraction for short text based on the multi-level semantic framework. We used four benchmark datasets to demonstrate that our model provides comparable results. The experimental results show accuracies on SUBJ, TREC, MR and ProcCons of 93.8%, 91.94%, 82.81% and 98.43%, respectively, verifying that our model greatly improves classification accuracy and model robustness. Full article
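The squashing non-linearity applied to the capsule outputs is the standard one from capsule networks: it preserves a vector's direction while mapping its length into [0, 1). A minimal NumPy version:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Keeps the direction of s; the output norm lies in [0, 1)."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

v = squash(np.array([3.0, 4.0]))   # norm 5 squashes to (25/26) ~ 0.96
```

Long capsule vectors keep a norm close to 1 (a confident detection) while short ones are suppressed toward 0, which is what dynamic routing relies on.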

15 pages, 4449 KiB  
Article
Optimizing Finite-Difference Operator in Seismic Wave Numerical Modeling
by Hui Li, Yuan Fang, Zhiguo Huang, Mengyao Zhang and Qing Wei
Algorithms 2022, 15(4), 132; https://doi.org/10.3390/a15040132 - 18 Apr 2022
Viewed by 2383
Abstract
The finite-difference method is widely used in seismic wave numerical simulation, imaging, and waveform inversion. In the finite-difference method, the finite difference operator is used to replace the differential operator approximately, which can be obtained by truncating the spatial convolution series. The properties of the truncated window function, such as the main and side lobes of the window function’s amplitude response, determine the accuracy of finite-difference, which subsequently affects the seismic imaging and inversion results significantly. Although numerical dispersion is inevitable in this process, it can be suppressed more effectively by using higher precision finite-difference operators. In this paper, we use the krill herd algorithm, in contrast with the standard PSO and CDPSO (a variant of PSO), to optimize the finite-difference operator. Numerical simulation results verify that the krill herd algorithm has good performance in improving the precision of the differential operator. Full article
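Before any swarm optimization, the baseline finite-difference coefficients come from truncating the series and matching Taylor terms; a sketch of that starting point (the krill herd step in the paper then adjusts such coefficients to reshape the operator's spectral response and suppress dispersion):

```python
import numpy as np

def taylor_fd_coeffs(M):
    """Coefficients c_m of the (2M)-th order central difference
    f'(x) ~= (1/h) * sum_{m=1..M} c_m * (f(x + m*h) - f(x - m*h)),
    obtained by matching Taylor-series terms."""
    A = np.array([[float(m) ** (2 * k + 1) for m in range(1, M + 1)]
                  for k in range(M)])
    b = np.zeros(M)
    b[0] = 0.5
    return np.linalg.solve(A, b)

c = taylor_fd_coeffs(2)   # -> [2/3, -1/12], the classic 4th-order stencil
h = 0.1
# Approximate d/dx sin(x) at x = 0 (exact value: 1).
d = sum(cm * (np.sin((m + 1) * h) - np.sin(-(m + 1) * h))
        for m, cm in enumerate(c)) / h
```

Taylor coefficients are exact for low wavenumbers but disperse high ones; optimized operators trade a little low-wavenumber accuracy for a flatter response across the band.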

19 pages, 994 KiB  
Article
Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems
by Mengjian Zhang, Deguang Wang and Jing Yang
Entropy 2022, 24(4), 525; https://doi.org/10.3390/e24040525 - 8 Apr 2022
Cited by 29 | Viewed by 2760
Abstract
Only the smell perception rule is considered in the butterfly optimization algorithm (BOA), which is prone to falling into a local optimum. Compared with the original BOA, an extra operator, i.e., a color perception rule, is incorporated into the proposed hybrid-flash butterfly optimization algorithm (HFBOA), which makes it more in line with the actual foraging characteristics of butterflies in nature. In addition, a strategy of updating the control parameters by logistic mapping is used in the HFBOA to enhance its global search ability. The performance of the proposed method was verified on twelve benchmark functions; the comparison results show that the HFBOA converges faster and has better stability on numerical optimization problems than six state-of-the-art optimization methods. Additionally, the proposed HFBOA is successfully applied to six engineering constrained optimization problems (i.e., tubular column design, tension/compression spring design, cantilever beam design, etc.). The simulation results reveal that the proposed approach demonstrates superior performance in solving complex real-world engineering constrained tasks. Full article
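The logistic map behind the parameter-update strategy is a one-line recurrence; a minimal sketch (μ = 4 gives the fully chaotic regime on (0, 1) — the paper's exact coupling to the HFBOA control parameters may differ):

```python
def logistic_map(x0, n, mu=4.0):
    """Iterate x_{k+1} = mu * x_k * (1 - x_k). With mu = 4 the orbit is
    chaotic on (0, 1), spreading parameter values across their range
    instead of letting them settle."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_map(0.3, 50)
```

Each value can then be rescaled into a control parameter's bounds, giving a deterministic but non-repeating schedule that helps the search escape local optima.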

14 pages, 4915 KiB  
Article
Insight into the Exemplary Physical Properties of Zn-Based Fluoroperovskite Compounds XZnF3 (X = Al, Cs, Ga, In) Employing Accurate GGA Approach: A First-Principles Study
by Anwar Habib, Mudasser Husain, Muhammad Sajjad, Nasir Rahman, Rajwali Khan, Mohammad Sohail, Ismat Hassan Ali, Shahid Iqbal, Mohammed Ilyas Khan, Sara A. M. Ebraheem, Ahmed M. El-Sabrout and Hosam O. Elansary
Materials 2022, 15(7), 2669; https://doi.org/10.3390/ma15072669 - 5 Apr 2022
Cited by 34 | Viewed by 2311
Abstract
Using the full-potential linearized augmented plane wave (FP-LAPW) method based on density functional theory, the structural, elastic, electronic, and optical properties of the simple cubic ternary fluoroperovskite compounds XZnF3 (X = Al, Cs, Ga, In) are calculated. To include the effect of exchange and correlation potentials, the generalized gradient approximation is applied in the optimization procedure. We find that, when the metallic cation “X” is changed from Cs to Al, the bulk modulus increases, indicating greater rigidity of the material. Based on its bulk modulus, AlZnF3 is harder and less compressible than the other three compounds, which have lower bulk moduli. The studied compounds are also found to be mechanically stable and anisotropic. The determined values of the Poisson ratio, Cauchy pressure, and Pugh ratio show that the compounds have a ductile nature. From the computed band structures, CsZnF3 has an indirect band gap of 3.434 eV (M–Γ), while AlZnF3, GaZnF3, and InZnF3 have indirect band gaps of 2.425 eV, 3.665 eV, and 2.875 eV (M–X), respectively. The optical properties are investigated for radiation up to 40 eV, and the main optical spectra peaks are described in terms of the computed electronic structure. These findings provide comprehensive insight into the physical properties of Zn-based fluoroperovskites. Full article

20 pages, 4326 KiB  
Article
Continuous Simulation of the Power Flow in AC–DC Hybrid Microgrids Using Simplified Modelling
by Oswaldo López-Santos, María C. Salas-Castaño and Diego F. Salazar-Dantonio
Cited by 3 | Viewed by 2946
Abstract
This paper reports the development of a model for continuous simulation of the power flow into AC–DC hybrid microgrids operating for different generation–consumption scenarios. The proposed application was assembled using a multiple-input multiple-output model which was built using blocks containing simplified models of photovoltaic (PV) modules, wind turbines (WT), battery arrays (energy storage units, ESU), and power loads. The average power was used as the input/output variable of the blocks, allowing flexibility for easy reconfiguration of the microgrid and its control. By defining a generation profile, PV and WT were modeled considering environmental conditions and efficiency profiles of the maximum power point tracking (MPPT) algorithms. ESUs were modeled from intrinsic characteristics of the batteries, considering a constant power charge regime and using the State of Energy (SoE) approach to compute autonomy. To define a consumption profile, DC and AC loads were modeled as a constant real power. As an innovative characteristic, unidirectional and bidirectional power conversion stages were modeled using efficiency profiles, which can be obtained from experiments applied to the real converters. The outputs of the models of generation, consumption, and storage units were integrated as inputs of the mathematical expressions computing the power balance of the buses of the microgrid. The proposed model is suitable to analyze efficiency for different configurations of the same microgrid architecture, and can be extended by integrating additional elements. The model was implemented in LabVIEW software and three examples were developed to test its correct operation. Full article

13 pages, 2796 KiB  
Article
Forecasting Network Interface Flow Using a Broad Learning System Based on the Sparrow Search Algorithm
by Xiaoyu Li, Shaobo Li, Peng Zhou and Guanglin Chen
Entropy 2022, 24(4), 478; https://doi.org/10.3390/e24040478 - 29 Mar 2022
Cited by 10 | Viewed by 2203
Abstract
In this paper, we propose a broad learning system based on the sparrow search algorithm. First, in order to avoid a complicated manual parameter tuning process and obtain the best combination of hyperparameters, the sparrow search algorithm is used to optimize the shrinkage coefficient (r) and regularization coefficient (λ) in the broad learning system to improve the prediction accuracy of the model. Second, the broad learning system is used to build a network interface flow forecasting model: the flow values in the time period [T−11, T] are used as the features for predicting the traffic at the moment T+1, and the hyperparameters output in the previous step are fed into the network to train the broad learning system network traffic prediction model. Finally, to verify the model performance, this paper trains the prediction model on two public network flow datasets and on real traffic data from an enterprise cloud platform switch interface, and compares the proposed model with the broad learning system, long short-term memory, and other methods. The experiments show that the prediction accuracy of this method is higher than that of the other methods, and the moving average reaches 97%, 98%, and 99% on the three datasets, respectively. Full article
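The windowing described, flow values in [T−11, T] as features for the value at T+1, can be sketched as:

```python
import numpy as np

def make_windows(series, w=12):
    """Turn a scalar flow series into (X, y) pairs: each row of X holds the
    w values in [T-w+1, T] and y holds the value at T+1. The abstract's
    [T-11, T] window corresponds to w = 12."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = np.array(series[w:])
    return X, y

X, y = make_windows(list(range(20)), w=12)   # 8 windows of 12 lags each
```

These (X, y) pairs are what the broad learning system is trained on, with its shrinkage and regularization coefficients supplied by the sparrow search step.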

42 pages, 19228 KiB  
Article
Computer Simulations of Injection Process of Elements Used in Electromechanical Devices
by Adam Gnatowski, Agnieszka Kijo-Kleczkowska, Jaroslaw Krzywanski, Przemyslaw Lemanski and Elzbieta Kopciuszewska
Materials 2022, 15(7), 2511; https://doi.org/10.3390/ma15072511 - 29 Mar 2022
Cited by 2 | Viewed by 2374
Abstract
This paper presents computer simulations of the injection process of elements used in electromechanical devices and an analysis of the impact of the injection molding process parameters on the quality of moldings. The study of the process was performed in Autodesk Simulation Moldflow Insight 2021. The setup of the injection process for the part must be based on the material and process technological data sheets and on knowledge of how the injection molding machine works. The supervision of production quality in the case of injection moldings is based on the information and requirements received from the customer. The main goal of the analysis is to answer the question of how to properly set up the process of filling the mold cavities in order to meet the quality requirements of the presented molding. In this paper, the simulation was compared with the real process. It is extremely important to optimize the injection, including synchronizing all process parameters; incorrectly selected parameter values may lead to product defects, losses and waste of raw materials, and unnecessary energy consumption connected with the process. Full article

15 pages, 980 KiB  
Article
A Hybrid Method Using HAVOK Analysis and Machine Learning for Predicting Chaotic Time Series
by Jinhui Yang, Juan Zhao, Junqiang Song, Jianping Wu, Chengwu Zhao and Hongze Leng
Entropy 2022, 24(3), 408; https://doi.org/10.3390/e24030408 - 15 Mar 2022
Cited by 10 | Viewed by 3687
Abstract
The prediction of chaotic time series systems has remained a challenging problem in recent decades. A hybrid method using Hankel Alternative View Of Koopman (HAVOK) analysis and machine learning (HAVOK-ML) is developed to predict chaotic time series. HAVOK-ML simulates the time series by reconstructing a closed linear model so as to achieve the purpose of prediction. It decomposes chaotic dynamics into intermittently forced linear systems by HAVOK analysis and estimates the external intermittently forcing term using machine learning. The prediction performance evaluations confirm that the proposed method has superior forecasting skills compared with existing prediction methods. Full article
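The first step of HAVOK analysis, stacking delay coordinates of the time series into a Hankel matrix and taking its SVD, can be sketched as:

```python
import numpy as np

def havok_basis(x, q):
    """Build a Hankel matrix of q-long delay windows of a scalar series and
    take its SVD; the leading right singular vectors are the delay
    coordinates on which HAVOK fits its intermittently forced linear model."""
    H = np.array([x[i:i + q] for i in range(len(x) - q + 1)]).T  # q x (N-q+1)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U, s, Vt

# A pure sinusoid gives a Hankel matrix of rank 2: only two singular
# values are non-negligible.
x = np.sin(0.1 * np.arange(200))
U, s, Vt = havok_basis(x, 20)
```

For chaotic data the spectrum decays more slowly; HAVOK keeps the first few coordinates as a linear system and treats the last retained one as the forcing term, which the paper's hybrid method then predicts with machine learning.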

15 pages, 2434 KiB  
Article
Ensemble Learning-Based Reactive Power Optimization for Distribution Networks
by Ruijin Zhu, Bo Tang and Wenhai Wei
Energies 2022, 15(6), 1966; https://doi.org/10.3390/en15061966 - 8 Mar 2022
Cited by 1 | Viewed by 1642
Abstract
Reactive power optimization of distribution networks is of great significance for improving power quality and reducing power loss. However, traditional methods for reactive power optimization of distribution networks either consume a lot of calculation time or have limited accuracy. In this paper, a novel data-driven approach is proposed to simultaneously improve accuracy and reduce calculation time for reactive power optimization using ensemble learning. Specifically, k-fold cross-validation is used to train multiple sub-models, which are merged to obtain high-quality optimization results through the proposed ensemble framework. The simulation results show that the proposed approach outperforms popular baselines, such as the light gradient boosting machine, convolutional neural network, case-based reasoning, and multi-layer perceptron. Moreover, the calculation time is much lower than that of traditional heuristic methods, such as the genetic algorithm. Full article
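The fold-wise training and merging of sub-models can be sketched as follows; Ridge regression is a stand-in for the paper's learners, and the mapping from grid state to optimal setpoints is synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Synthetic stand-in: features could be bus loads/voltages, the target an
# optimal reactive-power setpoint (hypothetical linear relation).
rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.normal(size=(200, 5))
y = X @ w_true + rng.normal(0.0, 0.1, 200)
X_new = rng.normal(size=(10, 5))

# Train one sub-model on each fold's training split.
models = []
for tr_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    models.append(Ridge(alpha=1.0).fit(X[tr_idx], y[tr_idx]))

# Merge: average the sub-models' predictions for new operating points.
merged_pred = np.mean([m.predict(X_new) for m in models], axis=0)
```

Averaging the fold models reduces the variance of any single fit, which is the basic motivation for the paper's ensemble framework; its actual merging rule may be more elaborate.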
