Search Results (279)

Search Parameters:
Keywords = piecewise algorithm

25 pages, 7655 KiB  
Article
Multi-Objective Optimal Trajectory Planning for Woodworking Manipulator and Worktable Based on the INSGA-II Algorithm
by Jiaping Yi, Changqing Zhang, Sihan Chen, Qinglong Dai, Hang Yu, Guang Yang and Leyuan Yu
Appl. Sci. 2025, 15(1), 310; https://doi.org/10.3390/app15010310 - 31 Dec 2024
Abstract
Manipulators are widely used in the wood-processing industry; the main problem currently faced is optimizing the motion trajectory to enhance the processing efficiency and operational stability of the woodworking manipulator and worktable. A 5-7-5 piecewise polynomial interpolation method is proposed to construct the spatial trajectories of each joint. An improved non-dominated sorting genetic algorithm (INSGA-II) is proposed to achieve time–jerk multi-objective trajectory planning that meets the dual requirements of minimal processing time and reduced motion impact. To address the limitations of the standard NSGA-II algorithm, which is prone to local optima and exhibits slow convergence, we propose a good-point-set method for initializing the multi-objective optimization population and a linear ranking selection method to refine parent selection within the genetic algorithm. The improved NSGA-II algorithm markedly enhances both the uniformity of the population distribution and the convergence speed. In practical applications, selecting suitable weightings to construct a normalized weight function identifies the optimal solution on the Pareto frontier, yielding a high-order continuous and smooth optimal trajectory without abrupt changes. The simulation results demonstrate that the 5-7-5 piecewise polynomial interpolation curve effectively constructs a high-order smooth processing trajectory with continuous velocity, acceleration, and jerk, free from discontinuities. Moreover, the INSGA-II algorithm outperforms the original algorithm in terms of convergence and distribution, enabling optimal time–jerk multi-objective trajectory planning that adheres to the constraint conditions. Optimized by the improved NSGA-II algorithm, the optimal total running time is 4.5400 s, and the optimal jerk is 17.934 m(rad)/s³. This provides a novel approach to solving the inefficiencies and operational instability prevalent in traditional woodworking equipment.
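
The 5-7-5 scheme joins quintic, septic, and quintic polynomial segments at the via points. As a hedged illustration of the basic building block (not the paper's full construction), the sketch below fits one quintic segment to position, velocity, and acceleration boundary conditions; the seventh-order middle segment and the INSGA-II optimization are not reproduced.

```python
import numpy as np

def quintic_segment(t0, t1, q0, q1, v0=0.0, v1=0.0, a0=0.0, a1=0.0):
    """Coefficients (low order first) of a degree-5 polynomial q(t)
    matching position, velocity, and acceleration at both endpoints."""
    def rows(t):
        return np.array([
            [1, t, t**2,   t**3,    t**4,    t**5],   # position
            [0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4],   # velocity
            [0, 0, 2,    6*t,    12*t**2, 20*t**3],   # acceleration
        ])
    A = np.vstack([rows(t0), rows(t1)])
    b = np.array([q0, v0, a0, q1, v1, a1])
    return np.linalg.solve(A, b)

# Example: one joint moving 0 -> 1 rad in 2 s, at rest at both ends.
c = quintic_segment(0.0, 2.0, 0.0, 1.0)
t = np.linspace(0.0, 2.0, 5)
print(np.polyval(c[::-1], t))   # polyval expects highest order first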

39 pages, 25059 KiB  
Article
Exploratory Study of a Green Function Based Solver for Nonlinear Partial Differential Equations
by Pablo Solano-López, Jorge Saavedra and Raúl Molina
Algorithms 2024, 17(12), 564; https://doi.org/10.3390/a17120564 - 10 Dec 2024
Abstract
This work explores the numerical translation of the weak or integral solution of nonlinear partial differential equations into a numerically efficient, time-evolving scheme. Specifically, we focus on partial differential equations separable into a quasilinear term and a nonlinear one, with the former defining the Green function of the problem. Utilizing the Green function under a short-time approximation, it becomes possible to derive the integral solution of the problem by breaking it into three integral terms: the propagation of the initial conditions and the contributions of the nonlinear and boundary terms. Accordingly, we follow this division to describe and separately analyze the resulting algorithm. To ensure low interpolation error and accurate numerical Green functions, we adapt a piecewise interpolation collocation method to the integral scheme, optimizing the positioning of grid points near the boundary region. At the same time, we employ a second-order quadrature method in time to efficiently implement the nonlinear terms. Both adapted methodologies are validated by applying them to problems with known analytical solutions, as well as to more challenging, norm-preserving problems such as the Burgers equation and the soliton solution of the nonlinear Schrödinger equation. Finally, the boundary term is derived and validated on a series of test cases that cover the range of possible scenarios for boundary problems within the introduced methodology.
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
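
A minimal sketch of the short-time Green function idea, under strong simplifying assumptions: a 1-D diffusion operator (whose free-space Green function is the heat kernel), a first-order treatment of the nonlinear term instead of the paper's second-order quadrature, and initial data far enough from the boundary that the boundary term can be dropped.

```python
import numpy as np

nu, dt, L, n = 0.5, 5e-3, 20.0, 512
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]

# Short-time free-space Green function (heat kernel) sampled as a matrix;
# one matrix-vector product propagates the field by dt.
G = np.exp(-(x[:, None] - x[None, :])**2 / (4*nu*dt)) / np.sqrt(4*np.pi*nu*dt) * dx

u = np.exp(-x**2)                 # initial condition
f = lambda u: u * (1.0 - u)       # illustrative nonlinear source (Fisher-KPP)

for _ in range(200):              # evolve to t = 1
    u = G @ u + dt * f(u)         # propagate ICs, add first-order nonlinear term
print("mass at t = 1:", u.sum() * dx)
```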

17 pages, 3083 KiB  
Article
Dynamic Inverse Control of Uncertain Pure Feedback Systems Based on Extended State Observer
by Yuanqing Wang, Wenyao Ma and Guichen Zhang
Symmetry 2024, 16(12), 1632; https://doi.org/10.3390/sym16121632 - 9 Dec 2024
Abstract
A novel, precise disturbance-rejection dynamic inversion control algorithm is proposed. In the high-order dynamic surface control system, an innovative approach utilizes a monotonically increasing inverse hyperbolic sine function to construct an extended state observer, which estimates the uncertain functions at each step. The monotonicity of the inverse hyperbolic sine function simplifies the system stability analysis. Additionally, being a smooth function, it avoids the disturbances caused by piecewise functions at their breakpoints in conventional observer constructions, thereby enhancing system stability. The accurate prediction capability of the new observer improves the system's disturbance-rejection performance. To address the differential explosion inherent in traditional dynamic inversion control schemes, this paper employs a tracking signal observer as a substitute for traditional first-order filters, thus avoiding the differential explosion they can cause. Finally, comparative simulations were conducted to validate the effectiveness of the proposed method. The results show that both the observer and the controller possess high-gain characteristics, and the closed-loop system exhibits a fast convergence rate.
(This article belongs to the Section Engineering and Materials)
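
A minimal sketch of the core idea, an extended state observer whose correction term is the inverse hyperbolic sine of the output error, is given below for a first-order plant with a lumped "unknown" disturbance; the plant, disturbance, and gains are illustrative choices, not the paper's.

```python
import numpy as np

dt, T, b = 1e-3, 10.0, 1.0
beta1, beta2 = 60.0, 900.0           # observer gains (illustrative)

x, xhat, zhat = 0.0, 0.0, 0.0        # plant state, state estimate, disturbance estimate
u = lambda t: np.sin(t)              # known control input
d = lambda t: np.sin(2*t) + 0.5      # "unknown" lumped disturbance to be estimated

for k in range(int(T / dt)):
    t = k * dt
    x += dt * (d(t) + b*u(t))                    # true plant: xdot = d + b*u
    e = np.arcsinh(x - xhat)                     # smooth, monotone error map
    xhat += dt * (zhat + b*u(t) + beta1 * e)
    zhat += dt * (beta2 * e)                     # extended state tracks d(t)

print("final disturbance-estimate error:", d(T) - zhat)
```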

25 pages, 5288 KiB  
Article
Prediction of Concrete Compressive Strength Based on ISSA-BPNN-AdaBoost
by Ping Li, Zichen Zhang and Jiming Gu
Materials 2024, 17(23), 5727; https://doi.org/10.3390/ma17235727 - 22 Nov 2024
Abstract
Strength testing of concrete relies mainly on physical experiments, which are both time-consuming and costly. Machine learning has proven to be a promising tool for concrete strength prediction. To improve the accuracy of predicting the compressive strength of concrete, this paper optimizes the base learner of an ensemble learning model. The position update formula in the search phase of the sparrow search algorithm (SSA) is improved, and piecewise chaotic mapping and adaptive t-distribution variation are added, which enhance population diversity and improve the algorithm's global search and convergence abilities. The effectiveness of the improvement strategy is demonstrated by comparing the improved sparrow search algorithm (ISSA) with several commonly used intelligent optimization algorithms on 10 test functions. A back-propagation neural network (BPNN) optimized with ISSA is used as the base learner, and the adaptive boosting (AdaBoost) algorithm is used to train and integrate multiple base learners, establishing the ISSA-BPNN-AdaBoost concrete compressive strength prediction model. Comparison experiments were then conducted against other ensemble models and single models on two strength prediction datasets. The experimental results show that the ISSA-BPNN-AdaBoost model performs excellently on both datasets and can accurately predict concrete compressive strength, demonstrating the superiority of ensemble learning for this task.
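
As a sketch of one of the improvements, the snippet below seeds a search population with iterates of a piecewise linear chaotic map (PWLCM) instead of uniform random draws; the exact map and parameters in the paper may differ, and the adaptive t-distribution mutation is not shown.

```python
import numpy as np

def pwlcm(x, p=0.4):
    """Piecewise linear chaotic map on (0, 1)."""
    x = x if x < 0.5 else 1.0 - x            # symmetric about 0.5
    return x / p if x < p else (x - p) / (0.5 - p)

def chaotic_population(pop_size, dim, lower, upper, seed=0.7):
    """Spread an initial population over the search box via PWLCM iterates,
    which cover the interval more evenly than independent uniform draws."""
    pop, x = np.empty((pop_size, dim)), seed
    for i in range(pop_size):
        for j in range(dim):
            x = pwlcm(x)
            pop[i, j] = lower[j] + x * (upper[j] - lower[j])
    return pop

pop = chaotic_population(30, 5, lower=np.zeros(5), upper=np.full(5, 10.0))
print(pop.shape, pop.min(), pop.max())
```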

14 pages, 299 KiB  
Article
Properties of the SURE Estimates When Using Continuous Thresholding Functions for Wavelet Shrinkage
by Alexey Kudryavtsev and Oleg Shestakov
Mathematics 2024, 12(23), 3646; https://doi.org/10.3390/math12233646 - 21 Nov 2024
Abstract
Wavelet analysis algorithms in combination with thresholding procedures are widely used in nonparametric regression problems when estimating a signal function from noisy data. The advantages of these methods lie in their computational efficiency and their ability to adapt to the local features of the estimated function. It is usually assumed that the signal function belongs to some special class. For example, it can be piecewise continuous or piecewise differentiable and have compact support. These assumptions, as a rule, allow the signal function to be represented economically in some specially selected basis, in such a way that the useful signal is concentrated in a relatively small number of expansion coefficients with large absolute values. Thresholding is then performed to remove the noise coefficients. Typically, the noise is assumed to be additive and Gaussian. This model is well studied in the literature, and various types of thresholding and parameter selection strategies adapted for specific applications have been proposed. The risk analysis of thresholding methods is an important practical task, since it makes it possible to assess the quality of both the methods themselves and the equipment used for processing. Most studies in this area investigate the asymptotic order of the theoretical risk. In practical situations, the theoretical risk cannot be calculated because it depends explicitly on the unobserved, noise-free signal. However, a statistical risk estimate constructed from the observed data can also be used to assess the quality of noise reduction methods. In this paper, a model of a signal contaminated with additive Gaussian noise is considered, and the general formulation of the thresholding problem with threshold functions belonging to a special class is discussed. Lower bounds are obtained for the threshold values that minimize the unbiased risk estimate. Conditions are also given under which this risk estimate is asymptotically normal and strongly consistent. These results can provide the basis for further research on constructing confidence intervals and estimating convergence rates, which, in turn, will make it possible to obtain specific error values in signal processing for a wide range of thresholding methods.
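
For the classical special case of soft thresholding (the paper treats a wider class of continuous threshold functions), the unbiased risk estimate and its minimizing threshold can be computed directly; the sketch below uses the standard Donoho-Johnstone SURE formula.

```python
import numpy as np

def sure_soft(t, x, sigma=1.0):
    """Stein's unbiased risk estimate for soft thresholding at threshold t."""
    return (x.size * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(x) <= t)
            + np.sum(np.minimum(np.abs(x), t)**2))

rng = np.random.default_rng(0)
theta = np.zeros(1000); theta[:50] = 5.0           # sparse "clean" coefficients
x = theta + rng.normal(size=1000)                  # noisy observations

ts = np.sort(np.abs(x))                            # candidate thresholds
t_star = ts[np.argmin([sure_soft(t, x) for t in ts])]
est = np.sign(x) * np.maximum(np.abs(x) - t_star, 0)
print(f"SURE-optimal threshold {t_star:.3f}, realized risk {np.sum((est - theta)**2):.1f}")
```
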
19 pages, 6780 KiB  
Article
Sensitivity of Spiking Neural Networks Due to Input Perturbation
by Haoran Zhu, Xiaoqin Zeng, Yang Zou and Jinfeng Zhou
Brain Sci. 2024, 14(11), 1149; https://doi.org/10.3390/brainsci14111149 - 16 Nov 2024
Abstract
Background: To investigate the behavior of spiking neural networks (SNNs), sensitivity to input perturbation serves as an effective metric for assessing the influence on the network output. However, existing methods fall short in evaluating the sensitivity of SNNs featuring biologically plausible leaky integrate-and-fire (LIF) neurons, owing to the intricate neuronal dynamics during the feedforward process. Methods: This paper first defines the sensitivity of a temporal-coded spiking neuron (SN) as the deviation between the perturbed and unperturbed output under a given input perturbation with respect to the overall inputs. The sensitivity algorithm for an entire SNN is then derived iteratively from the sensitivity of each individual neuron. Instead of the actual firing time, the desired firing time is employed to derive a more precise analytical expression of the sensitivity. Moreover, the expectation of the membrane potential difference is utilized to quantify the magnitude of the input deviation. Results/Conclusions: The theoretical results achieved with the proposed algorithm are in reasonable agreement with the simulation results obtained with extensive input data. The sensitivity varies monotonically with changes in the other parameters, except for the number of time steps, providing valuable insights for choosing appropriate values when constructing the network. With respect to the number of time steps, the sensitivity exhibits a piecewise decreasing tendency, with the length and starting point of each piece contingent upon the specific parameter values of the neuron.
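
A minimal numerical illustration of the underlying notion, output deviation under a small input perturbation, is sketched below; it measures first-spike-time sensitivity of a single LIF neuron with a constant input, not the paper's iterative network-level algorithm.

```python
import numpy as np

def lif_first_spike(I, dt=1e-4, tau=0.02, v_th=1.0):
    """First spike time of a leaky integrate-and-fire neuron driven by a
    constant input current I (membrane dynamics: tau * dv/dt = -v + I)."""
    v, t = 0.0, 0.0
    while t < 0.5:
        v += dt / tau * (-v + I)
        t += dt
        if v >= v_th:
            return t
    return np.inf                      # no spike within the window

I0, eps = 1.5, 0.01
base, pert = lif_first_spike(I0), lif_first_spike(I0 + eps)
print(f"spike-time shift per unit input perturbation: {(base - pert)/eps:.4f} s")
```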

21 pages, 8080 KiB  
Article
Research on Target Allocation for Hard-Kill Swarm Anti-Unmanned Aerial Vehicle Swarm Systems
by Jianan Zong, Xianzhong Gao, Yue Zhang and Zhongxi Hou
Abstract
Effective countermeasures against saturation attacks by low, slow, and small UAV swarms are currently lacking, and counter-UAV-swarm technology is an important issue that urgently requires breakthroughs. This paper studies a mid–short-range hard-kill counter-swarm scenario in which fewer, stronger swarms confront multiple weaker swarms. The requirement is for counter-swarm UAVs to quickly penetrate the swarm at mid–short range and collide with as many incoming UAVs as possible to destroy them. To address the sparse solution space, an improved genetic algorithm that integrates multiple strategies is adopted to calculate the spatial density distribution of the incoming swarm. A baseline that maximizes the density integral along a straight-line direction is identified through gradient descent, and the solution space for single strikes on the swarm is filtered based on this baseline. During the solution process, an elite strategy is introduced to prevent overall degradation of the population performance. Additionally, the feasibility of the flight trajectory is assessed: a piecewise cubic spline interpolation method is used to optimize the flight trajectory, minimizing its maximum curvature. Ultimately, multiple counter-swarm UAV targets within the swarm and their corresponding trajectories are obtained.
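
The trajectory-smoothing step can be illustrated with an ordinary piecewise cubic spline: fit C2-continuous cubics through candidate waypoints and evaluate the curvature profile. The sketch below (illustrative waypoints, scipy assumed) computes the maximum-curvature quantity that such an optimizer would seek to minimize; waypoint optimization itself is not shown.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative 2-D waypoints for one strike trajectory.
s = np.linspace(0.0, 1.0, 5)                        # path parameter
pts = np.array([[0, 0], [1, 0.8], [2, 0.5], [3, 1.2], [4, 1.0]], float)

cs = CubicSpline(s, pts, axis=0)                    # piecewise cubic, C2-continuous
ss = np.linspace(0, 1, 500)
d1, d2 = cs(ss, 1), cs(ss, 2)                       # first/second derivatives

# Curvature of a planar parametric curve: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
kappa = np.abs(d1[:, 0]*d2[:, 1] - d1[:, 1]*d2[:, 0]) / (d1**2).sum(1)**1.5
print("max curvature along the spline:", kappa.max())
```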

19 pages, 2502 KiB  
Article
Piecewise Neural Network Method for Solving Large Interval Solutions to Initial Value Problems of Ordinary Differential Equations
by Dongpeng Han and Chaolu Temuer
Symmetry 2024, 16(11), 1490; https://doi.org/10.3390/sym16111490 - 7 Nov 2024
Abstract
Traditional numerical methods often provide local solutions for initial value problems of differential equations, even though these problems may have solutions over larger intervals. Current neural network algorithms and deep learning methods also struggle to ensure solutions across these broader intervals. This paper introduces a novel approach employing piecewise neural networks to address this issue. The method divides the solution interval into smaller segments and uses neural networks of a uniform structure to solve the sub-problem on each segment. These sub-solutions are then combined into a piecewise expression representing the overall solution. The approach guarantees continuous differentiability of the obtained solution over the entire interval, except at the finitely many endpoints of the sub-intervals. To enhance accuracy, parameter transfer and multiple rounds of pre-training are employed. Importantly, this method maintains a consistent network size and training data scale across sub-domains, unlike existing neural network algorithms. Numerical experiments validate the efficiency of the proposed algorithm.
(This article belongs to the Section Mathematics)
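
A heavily simplified sketch of the interval-splitting idea follows: each sub-interval gets its own approximator, and the endpoint value of one segment becomes the initial condition of the next. To stay self-contained, gradient-trained networks are replaced here by random-feature least-squares collocation (an extreme-learning-machine-style stand-in), so only the piecewise structure and the condition transfer mirror the paper, not its training scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_segment(a, b, u_a, n_feat=40, n_col=80):
    """Least-squares collocation for u' = -u on [a, b] with u(a) = u_a,
    using the ansatz u(t) = u_a + (t - a) * phi(t) @ c, which enforces
    the initial condition exactly."""
    w, bias = rng.normal(size=n_feat), rng.normal(size=n_feat)
    t = np.linspace(a, b, n_col)[:, None]
    phi = np.tanh(w*t + bias)
    dphi = w * (1 - phi**2)
    # the ODE residual u' + u = 0 is linear in the coefficients c
    A = phi + (t - a)*dphi + (t - a)*phi
    c, *_ = np.linalg.lstsq(A, -u_a*np.ones(n_col), rcond=None)
    return lambda tt: u_a + (tt - a)*(np.tanh(w*tt + bias) @ c)

# Chain segments: each sub-solution hands its endpoint value to the next.
breaks, u0 = np.linspace(0, 10, 6), 1.0
for a, b in zip(breaks[:-1], breaks[1:]):
    u0 = solve_segment(a, b, u0)(b)

print("u(10) piecewise:", u0, " exact:", np.exp(-10.0))
```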

17 pages, 9452 KiB  
Article
GLMI: An Efficient Spatiotemporal Index Leveraging Geohash and Piecewise Linear Models for Optimized Query Performance
by Kun Chen, Gang Liu, Genshen Chen, Zhengping Weng and Qiyu Chen
Algorithms 2024, 17(11), 474; https://doi.org/10.3390/a17110474 - 22 Oct 2024
Abstract
Spatiotemporal big data contain information in multiple dimensions, such as space and time, and are characterized by large volume, intricate spatiotemporal relationships, and uneven spatiotemporal distribution. Index structures are among the most important technologies for improving data analysis and query workloads, yet they are difficult to adjust dynamically as data density changes, which increases maintenance costs and retrieval complexity. At the same time, preserving the proximity of spatiotemporal data in the spatial and temporal dimensions is crucial for efficient spatiotemporal analysis. To address these challenges, this paper proposes a learned index method, GLMI (Geohash and piecewise linear model-based index for spatiotemporal data). GLMI uses dynamic space partitioning based on the Hilbert curve to reduce the impact of data skew on index performance. In the time dimension, a piecewise linear model is constructed using the ShrinkingCone algorithm, and a buffer is designed to support fast writing of spatiotemporal data. Compared with current mainstream high-dimensional indexes and the ZM learned index, GLMI has smaller space consumption and shorter construction time on real traffic itinerary and trajectory record datasets, while also holding an advantage in query efficiency.
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
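
The time-dimension model can be sketched with the ShrinkingCone idea the abstract cites (known from FITing-Tree): greedily grow a linear segment while some slope keeps every covered key within a fixed position error, and cut when the feasible slope cone becomes empty. The version below is a simplified reconstruction, not GLMI's implementation; a lookup evaluates pos0 + slope*(key - key0) and scans the +/- err window.

```python
import math

def shrinking_cone(keys, err=4):
    """One-pass greedy segmentation: start a new linear segment whenever no
    single line through the segment origin keeps every covered key within
    +/- err of its true position in the sorted array."""
    segs, origin, lo, hi = [], 0, -math.inf, math.inf
    for i in range(1, len(keys)):
        dx = keys[i] - keys[origin]
        s_lo, s_hi = (i - origin - err) / dx, (i - origin + err) / dx
        if s_lo > hi or s_hi < lo:               # outside the cone: cut here
            segs.append((keys[origin], origin, (lo + hi) / 2))
            origin, lo, hi = i, -math.inf, math.inf
        else:                                    # inside: shrink the cone
            lo, hi = max(lo, s_lo), min(hi, s_hi)
    segs.append((keys[origin], origin, (lo + hi) / 2 if math.isfinite(lo) else 0.0))
    return segs

keys = [3, 7, 8, 15, 42, 43, 44, 90, 120, 121]   # sorted key column
for key0, pos0, slope in shrinking_cone(keys, err=2):
    print(f"segment at key {key0} (pos {pos0}), slope {slope:.4f}")
```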

16 pages, 6083 KiB  
Article
Thermal Fault-Tolerant Asymmetric Dual-Winding Motors in Integrated Electric Braking System for Autonomous Vehicles
by Kyu-Yun Hwang, Seon-Yeol Oh, Eun-Kyung Park, Baik-Kee Song and Sung-Il Kim
Abstract
A conventional dual-winding (DW) motor has two internal windings, a master part and a slave part, each connected to a different electronic control unit (ECU) to realize a redundant system. However, existing DW motors suffer from heat generation problems in both the healthy and faulty modes of motor operation. In the healthy mode, unexpected overloads can cause both windings to burn out simultaneously due to equal heat distribution. If the current sensor fails to measure correctly, the motor may exceed the designed current density of 4.7 Arms/mm² under air-cooling conditions, further increasing the burnout risk. External factors such as excessive load cycles or extreme heat conditions can further exacerbate this issue. In the faulty mode, the motor requires double the current to generate maximum torque, leading to rapid temperature increases and a high risk of overheating. To address these challenges, this paper proposes the design of a thermal fault-tolerant asymmetric dual-winding (ADW) motor, which improves heat management in both healthy and faulty modes for autonomous vehicles. A lumped-parameter thermal network (LPTN) with a piecewise stator-housing model (PSM) was employed to evaluate the coil temperature during faulty operation. An optimal design approach incorporating kriging modeling, Design of Experiments (DOE), and a genetic algorithm (GA) was also utilized. The results confirm that the proposed ADW motor design effectively reduces the risk of simultaneous burnout in the healthy mode and of overheating in the faulty mode, offering a robust solution for autonomous vehicle applications.
(This article belongs to the Section Electrical Machines and Drives)

16 pages, 14213 KiB  
Article
Bridging the Terrestrial Water Storage Anomalies between the GRACE/GRACE-FO Gap Using BEAST + GMDH Algorithm
by Nijia Qian, Jingxiang Gao, Zengke Li, Zhaojin Yan, Yong Feng, Zhengwen Yan and Liu Yang
Remote Sens. 2024, 16(19), 3693; https://doi.org/10.3390/rs16193693 - 3 Oct 2024
Abstract
Regarding the terrestrial water storage anomaly (TWSA) gap between the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) gravity satellite missions, a BEAST (Bayesian estimator of abrupt change, seasonal change and trend) + GMDH (group method of data handling) gap-filling scheme driven by hydrological and meteorological data is proposed. Because such driving data usually cannot fully capture trend changes in the TWSA time series, we propose first using the BEAST algorithm to perform a piecewise linear detrending of the TWSA series and then filling the gap of the detrended series with the GMDH algorithm. The complete gap-filled TWSAs are readily obtained after adding back the previously removed piecewise trend. Comparing simulated gaps filled by BEAST + GMDH, by multiple linear regression, and by singular spectrum analysis against reference values shows that the BEAST + GMDH scheme is superior to the latter two in terms of the correlation coefficient, Nash efficiency coefficient, and root-mean-square error. The real GRACE/GRACE-FO gap filled by BEAST + GMDH is consistent with hydrological models, Swarm TWSAs, and other literature regarding spatial distribution patterns; the corresponding correlation coefficients are above 0.90, 0.80, and 0.90, respectively, in most of the global river basins.
(This article belongs to the Section Earth Observation Data)
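
A toy version of the detrend-fill-retrend pipeline is sketched below on synthetic data, with BEAST replaced by a one-breakpoint least-squares scan and GMDH replaced by ordinary linear regression on the driver features; only the overall scheme, not the actual algorithms, is reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200.0)
drivers = np.column_stack([np.sin(2*np.pi*t/12), np.cos(2*np.pi*t/12)])  # met/hydro proxies
beta = np.array([1.0, 0.5])
trend = np.where(t < 120, -0.02*t, -2.4 + 0.03*(t - 120))                # kinked trend
truth = trend + drivers @ beta
y = truth + 0.05*rng.normal(size=t.size)
gap = (t >= 90) & (t < 101)                                              # the satellite gap
obs = ~gap

def trend_fit(bp):
    """Continuous piecewise linear trend with one breakpoint at bp."""
    X = np.column_stack([np.ones_like(t), np.minimum(t, bp), np.maximum(t - bp, 0)])
    c, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    return X @ c

bp = min(range(20, 180), key=lambda b: np.sum((trend_fit(b)[obs] - y[obs])**2))
trend_hat = trend_fit(bp)

# Regress the detrended residual on the drivers, predict inside the gap,
# then restore the removed piecewise trend.
c, *_ = np.linalg.lstsq(drivers[obs], (y - trend_hat)[obs], rcond=None)
y_filled = y.copy()
y_filled[gap] = drivers[gap] @ c + trend_hat[gap]
print(f"breakpoint ~{bp}, gap RMSE = {np.sqrt(np.mean((y_filled[gap]-truth[gap])**2)):.3f}")
```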

18 pages, 486 KiB  
Article
Computation of the Survival Probability of Brownian Motion with Drift Subject to an Intermittent Step Barrier
by Tristan Guillaume
AppliedMath 2024, 4(3), 1080-1097; https://doi.org/10.3390/appliedmath4030058 - 2 Sep 2024
Abstract
This article provides an exact formula for the survival probability of Brownian motion with drift when the absorbing boundary is an intermittent step barrier, i.e., an alternating sequence of time intervals on which the boundary is piecewise constant and time intervals on which no boundary is defined. Numerical implementation is handled by a simple and robust Monte Carlo integration algorithm derived directly from the formula, which compares favorably with conditional Monte Carlo simulation. Exact analytical benchmarks are also provided to assess the accuracy of the numerical implementation.
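
The paper's exact formula is not reproduced here, but the kind of path-simulation benchmark it is compared against can be sketched: simulate the endpoint values of each barrier-active window and apply the standard Brownian-bridge crossing probability exp(-2(x0-b)(x1-b)/(sigma^2 * dt)) inside it. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def survival_prob(x0, mu, sigma, windows, n_paths=200_000):
    """Monte Carlo survival probability of X_t = x0 + mu*t + sigma*W_t against
    a lower absorbing barrier active only on the given windows, each given as
    (t_start, t_end, level). A Brownian-bridge correction catches crossings
    between the simulated window endpoints."""
    t_prev, x = 0.0, np.full(n_paths, float(x0))
    alive = np.ones(n_paths, bool)
    for t0, t1, b in windows:
        # free diffusion up to the window start (no barrier in force there)
        x = x + mu*(t0 - t_prev) + sigma*np.sqrt(t0 - t_prev)*rng.normal(size=n_paths)
        # diffuse across the window, then apply the bridge crossing test
        x_new = x + mu*(t1 - t0) + sigma*np.sqrt(t1 - t0)*rng.normal(size=n_paths)
        hit = (x <= b) | (x_new <= b)
        p_bridge = np.exp(-2*np.maximum(x - b, 0)*np.maximum(x_new - b, 0)
                          / (sigma**2*(t1 - t0)))
        hit |= rng.uniform(size=n_paths) < p_bridge
        alive &= ~hit
        x, t_prev = x_new, t1
    return alive.mean()

# Barrier at 0.8 active on [0, 1] and again on [2, 3]; none in between.
print(survival_prob(x0=1.0, mu=0.05, sigma=0.3,
                    windows=[(0.0, 1.0, 0.8), (2.0, 3.0, 0.8)]))
```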

20 pages, 11882 KiB  
Article
Estimating the Cowper–Symonds Parameters for High-Strength Steel Using DIC Combined with Integral Measures of Deviation
by Andrej Škrlec, Branislav Panić, Marko Nagode and Jernej Klemenc
Metals 2024, 14(9), 992; https://doi.org/10.3390/met14090992 - 31 Aug 2024
Abstract
Cowper–Symonds parameters were estimated for the complex-phase high-strength steel with the commercial name SZBS800. The parameter estimation was based on a series of conventional tensile tests and unconventional high-strain-rate experiments, using a reverse engineering approach. LS-Dyna was used for the numerical simulations, and the material's response was modelled with a piecewise linear plasticity model using a visco-plastic formulation of the Cowper–Symonds material model. A multi-criteria cost function was defined and applied to obtain a response function for the parameters p and C. The cost function was modelled with a response surface, and the optimal parameters were estimated using a real-valued genetic algorithm. The main novelty of this article is the definition of a cost function that measures the deviation between the deformed geometry of the flat plate-like specimens and the results of the numerical simulations. The results are compared to the relevant literature, and a critical evaluation of our results against those references is a further novelty of this article.
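
The Cowper–Symonds model scales the static flow stress by 1 + (strain_rate/C)^(1/p). The paper estimates p and C by reverse engineering against LS-Dyna simulations; as a much simpler illustration on hypothetical data, the relation can be log-linearized and fitted directly.

```python
import numpy as np

# Cowper-Symonds: sigma_dyn = sigma_static * (1 + (rate / C)**(1/p))
# Hypothetical (strain rate [1/s], dynamic/static stress ratio) pairs.
rate = np.array([1.0, 10.0, 100.0, 500.0, 1000.0])
ratio = np.array([1.05, 1.09, 1.17, 1.26, 1.30])

# log-linearize: ln(ratio - 1) = (1/p) * ln(rate) - (1/p) * ln(C)
A = np.column_stack([np.log(rate), np.ones_like(rate)])
slope, intercept = np.linalg.lstsq(A, np.log(ratio - 1.0), rcond=None)[0]
p = 1.0 / slope
C = np.exp(-intercept * p)
print(f"fitted p = {p:.2f}, C = {C:.1f} 1/s")
```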

17 pages, 4895 KiB  
Article
Leveraging Prosumer Flexibility to Mitigate Grid Congestion in Future Power Distribution Grids
by Domenico Tomaselli, Dieter Most, Enkel Sinani, Paul Stursberg, Hans Joerg Heger and Stefan Niessen
Energies 2024, 17(17), 4217; https://doi.org/10.3390/en17174217 - 23 Aug 2024
Abstract
The growing adoption of behind-the-meter (BTM) photovoltaic (PV) systems, electric vehicle (EV) home chargers, and heat pumps (HPs) is causing increasing grid congestion, particularly in power distribution grids. Leveraging BTM prosumer flexibility offers a cost-effective and readily available way to address these issues without resorting to expensive and time-consuming infrastructure upgrades. This work evaluates the effectiveness of this solution through a novel modeling framework that combines a rolling horizon (RH) optimal power flow (OPF) algorithm with a customized piecewise linear cost function. The framework allows individual control of flexible BTM assets through various control measures while modeling the power flow (PF) and accounting for grid constraints. We demonstrate the practical utility of the proposed framework in an exemplary residential region in Schutterwald, Germany. To this end, we constructed a PF-ready grid model for the region, geographically allocated a future BTM asset mix, and generated tailored load and generation profiles for each household. We found that BTM storage systems optimized for self-consumption can fully resolve feed-in violations at HV/MV stations but mitigate only 35% of the future load violations. Implementing additional control measures is key to addressing the remaining load violations. While curative measures, e.g., temporarily limiting EV charging or HP usage, have minimal impact, proactive measures that control both the charging and discharging of BTM storage systems can effectively address the remaining load violations, even in grids already operating at or near full capacity.
(This article belongs to the Section F3: Power Electronics)
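
Encoding a convex piecewise linear cost in an OPF-style LP is commonly done with one bounded variable per cost segment; because marginal prices increase, the solver fills cheap segments first without needing integer variables. Below is a minimal single-asset sketch (illustrative prices and demand, scipy linprog, no network constraints), not the paper's framework.

```python
from scipy.optimize import linprog

# Convex piecewise linear cost for one flexible asset, one variable per
# segment; increasing marginal prices make the LP fill segments in order.
marginal_cost = [1.0, 2.0, 5.0]      # price per kW on each segment
seg_width = [4.0, 4.0, 2.0]          # kW capacity of each segment
demand = 7.0                         # kW that must be supplied

res = linprog(c=marginal_cost,
              A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand],   # s1+s2+s3 >= demand
              bounds=list(zip([0.0]*3, seg_width)),
              method="highs")
print("dispatch per segment:", res.x, "total cost:", res.fun)
```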

16 pages, 1034 KiB  
Article
Efficient Sleep Stage Identification Using Piecewise Linear EEG Signal Reduction: A Novel Algorithm for Sleep Disorder Diagnosis
by Yash Paul, Rajesh Singh, Surbhi Sharma, Saurabh Singh and In-Ho Ra
Sensors 2024, 24(16), 5265; https://doi.org/10.3390/s24165265 - 14 Aug 2024
Abstract
Sleep is a vital physiological process for human health, and accurately detecting the various sleep states is crucial for diagnosing sleep disorders. This study presents a novel algorithm for identifying sleep stages from EEG signals that is more efficient and accurate than state-of-the-art methods. The key innovation lies in employing a piecewise linear data reduction technique, the Halfwave method, in the time domain. This method simplifies EEG signals into a piecewise linear form of reduced complexity while preserving sleep stage characteristics. A feature vector of six statistical features is then built from parameters of the reduced piecewise linear function. We tested the proposed method on the MIT-BIH Polysomnographic Database, which includes more than 80 h of recordings of different biomedical signals with six main sleep classes. Among the classifiers evaluated, the K-Nearest Neighbor classifier performed best with the proposed method. According to the experimental findings, the average sensitivity, specificity, and accuracy of the proposed algorithm on the Polysomnographic Database across eight records are estimated at 94.82%, 96.65%, and 95.73%, respectively. Furthermore, the algorithm's computational efficiency makes it suitable for real-time applications such as sleep monitoring devices. Its robust performance across the sleep classes suggests potential for widespread clinical adoption, supporting advances in the understanding, detection, and management of sleep problems.
(This article belongs to the Section Biosensors)
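
The Halfwave method's exact definition is not given in the abstract; as a generic stand-in, the sketch below reduces a signal to the piecewise linear curve through its turning points and derives six simple statistics from the resulting halfwave amplitudes and durations. The feature set is illustrative, not the paper's.

```python
import numpy as np

def halfwave_reduce(x):
    """Piecewise linear reduction: keep only the local extrema (turning
    points), so each halfwave between two extrema collapses to a line."""
    dx = np.diff(x)
    turns = np.flatnonzero(np.sign(dx[1:]) != np.sign(dx[:-1])) + 1
    idx = np.concatenate(([0], turns, [len(x) - 1]))
    return idx, x[idx]

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 400)
eeg = np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*11*t) + 0.01*rng.normal(size=t.size)

idx, vals = halfwave_reduce(eeg)
amp = np.abs(np.diff(vals))            # halfwave amplitudes
dur = np.diff(t[idx])                  # halfwave durations
feats = [amp.mean(), amp.std(), amp.max(), dur.mean(), dur.std(), len(amp)/t[-1]]
print(f"{len(idx)} points kept of {len(t)}; features:", np.round(feats, 3))
```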
