Search Results (9)

Search Parameters:
Keywords = squared Hellinger distance

34 pages, 1884 KiB  
Article
SIMECK-T: An Ultra-Lightweight Encryption Scheme for Resource-Constrained Devices
by Alin-Adrian Anton, Petra Csereoka, Eugenia-Ana Capota and Răzvan-Dorel Cioargă
Appl. Sci. 2025, 15(3), 1279; https://doi.org/10.3390/app15031279 - 26 Jan 2025
Viewed by 442
Abstract
The Internet of Things produces vast amounts of data that require specialized algorithms to secure them. Lightweight cryptography calls for ciphers designed to run on resource-constrained devices such as sensors and smart things. A new encryption scheme is introduced based on a blend of the best-performing algorithms, SIMECK and TEA. A selection of software-oriented Addition–Rotation–XOR (ARX) block ciphers is augmented with a dynamic substitution security layer. The performance is compared against other lightweight approaches. The US National Institute of Standards and Technology (NIST) SP800-22 Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications and the German AIS.31 methodology of the Federal Office for Information Security (BSI) are used to validate the output of the proposed encryption scheme. The law of the iterated logarithm (LIL) for randomness is verified in all three forms. The total variation (TV), the Hellinger distance (HD), and the root-mean-square deviation (RMSD) show values smaller than the required limit for 10,000 sequences of ciphertext. The performance evaluation is carried out on a Raspberry Pi Pico (RP2040). Several security metrics, such as χ² and encryption quality (EQ), are compared against other ciphers. The results show that SIMECK-T is a powerful, fast, software-oriented lightweight cryptography solution.
(This article belongs to the Section Computing and Artificial Intelligence)
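As a rough illustration of the distribution-level randomness checks named in the abstract, the sketch below computes the total variation, squared Hellinger distance, and RMSD between a ciphertext's byte-frequency distribution and the uniform distribution. The byte-level granularity, the random stand-in for SIMECK-T output, and the absence of pass/fail thresholds are assumptions, not the paper's setup.

```python
# Hedged sketch: TV, squared Hellinger, and RMSD of a byte stream's empirical
# distribution against uniform; not the paper's exact test configuration.
import numpy as np

def byte_distribution(ciphertext: bytes) -> np.ndarray:
    """Empirical distribution of byte values 0..255."""
    counts = np.bincount(np.frombuffer(ciphertext, dtype=np.uint8), minlength=256)
    return counts / counts.sum()

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

def squared_hellinger(p, q):
    return 1.0 - np.sqrt(p * q).sum()

def rmsd(p, q):
    return np.sqrt(np.mean((p - q) ** 2))

uniform = np.full(256, 1.0 / 256)
p = byte_distribution(np.random.bytes(1 << 20))  # stand-in for SIMECK-T output
print(total_variation(p, uniform), squared_hellinger(p, uniform), rmsd(p, uniform))
```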

22 pages, 1347 KiB  
Article
Semi-Empirical Approach to Evaluating Model Fit for Sea Clutter Returns: Focusing on Future Measurements in the Adriatic Sea
by Bojan Vondra
Entropy 2024, 26(12), 1069; https://doi.org/10.3390/e26121069 - 9 Dec 2024
Viewed by 512
Abstract
A method for evaluating the Kullback–Leibler (KL) divergence and Squared Hellinger (SH) distance between empirical data and a model distribution is proposed. This method exclusively utilises the empirical Cumulative Distribution Function (CDF) of the data and the CDF of the model, avoiding data processing such as histogram binning. The proposed method converges almost surely, with the proof based on the use of exponentially distributed waiting times. An example demonstrates convergence of the KL divergence and SH distance to their true values when the Generalised Pareto (GP) distribution is used as the empirical data and the K distribution as the model. Another example illustrates the goodness of fit of these models (GP and K distribution) to real sea clutter data from the widely used Intelligent PIxel processing X-band (IPIX) measurements. The proposed method can be applied to assess the goodness of fit of various models (not limited to the GP or K distribution) to clutter measurement data such as those from the Adriatic Sea. Distinctive features of this small and immature sea, like the presence of over 1300 islands that affect local wind and wave patterns, are likely to result in an amplitude distribution of sea clutter returns that differs from the predictions of models designed for oceans or open seas. However, to the author's knowledge, no data on this specific topic are currently available in the open literature, and such measurements have yet to be conducted.
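For orientation, the two divergences have the standard integral definitions sketched below. SciPy does not ship a K distribution, so two Generalised Pareto models with different shape parameters stand in for the "data" and "model" densities, and the direct quadrature here is only a baseline definition, not the paper's density-free CDF-based estimator.

```python
# Hedged sketch: KL divergence and squared Hellinger distance by direct quadrature
# of two candidate densities (placeholders; the paper avoids densities entirely).
import numpy as np
from scipy import integrate
from scipy.stats import genpareto

p = genpareto(c=0.3)   # placeholder "empirical" model
q = genpareto(c=0.1)   # placeholder fitted model

def kl_divergence(p, q, lo=1e-9, hi=200.0):
    f = lambda x: p.pdf(x) * (np.log(p.pdf(x)) - np.log(q.pdf(x)))
    return integrate.quad(f, lo, hi, limit=200)[0]

def squared_hellinger(p, q, lo=0.0, hi=200.0):
    f = lambda x: np.sqrt(p.pdf(x) * q.pdf(x))
    return 1.0 - integrate.quad(f, lo, hi, limit=200)[0]

print(kl_divergence(p, q), squared_hellinger(p, q))
```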

13 pages, 419 KiB  
Article
Robust Estimation of the Tail Index of a Single Parameter Pareto Distribution from Grouped Data
by Chudamani Poudyal
Viewed by 1633
Abstract
Numerous robust estimators exist as alternatives to the maximum likelihood estimator (MLE) when a completely observed ground-up loss severity sample dataset is available. However, the options for robust alternatives to an MLE become significantly limited when dealing with grouped loss severity data, with only a handful of methods, like least squares, minimum Hellinger distance, and optimal bounded influence function, available. This paper introduces a novel robust estimation technique, the Method of Truncated Moments (MTuM), specifically designed to estimate the tail index of a Pareto distribution from grouped data. Inferential justification of the MTuM is established by employing the central limit theorem and validating it through a comprehensive simulation study.
(This article belongs to the Special Issue Advancements in Actuarial Mathematics and Risk Theory)
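The abstract does not spell out the MTuM construction, so the sketch below shows only the standard grouped-data baseline such robust methods compete with: a multinomial MLE of the single-parameter Pareto tail index α from binned loss counts. The threshold, bin edges, and counts are illustrative.

```python
# Hedged sketch: grouped-data MLE for the single-parameter Pareto tail index
# (the non-robust baseline, not the paper's MTuM estimator).
import numpy as np
from scipy.optimize import minimize_scalar

x_m = 1.0                                        # known Pareto threshold
edges = np.array([1.0, 2.0, 5.0, 10.0, np.inf])  # group boundaries
counts = np.array([120, 60, 15, 5])              # observed losses per group

def pareto_cdf(x, alpha):
    return np.where(np.isinf(x), 1.0, 1.0 - (x_m / x) ** alpha)

def neg_loglik(alpha):
    probs = np.diff(pareto_cdf(edges, alpha))    # multinomial cell probabilities
    return -np.sum(counts * np.log(probs))

alpha_hat = minimize_scalar(neg_loglik, bounds=(0.01, 10.0), method="bounded").x
print(f"grouped-data MLE tail index: {alpha_hat:.3f}")
```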

17 pages, 4297 KiB  
Article
Relative Entropy Application to Study the Elastoplastic Behavior of S235JR Structural Steel
by Marcin Kamiński and Michał Strąkowski
Materials 2024, 17(3), 727; https://doi.org/10.3390/ma17030727 - 3 Feb 2024
Viewed by 859
Abstract
The main issue in this work is to study the limit functions necessary for the reliability assessment of structural steel with the use of the relative entropy apparatus. This is done using a few different mathematical formulations of relative entropy, namely those proposed by Bhattacharyya, Kullback–Leibler, Jeffreys, and Hellinger. Probabilistic analysis in the presence of uncertainty in material characteristics is delivered using three different numerical strategies: Monte Carlo simulation, the stochastic perturbation method, and the semi-analytical approach. All of these methods are based on weighted least squares approximations of the structural response functions versus the given uncertainty source, and they allow efficient determination of the first two probabilistic moments of the structural responses, including stresses, displacements, and strains. The entire computational implementation is delivered using the finite element method system ABAQUS and the computer algebra program MAPLE, where the relative entropies, as well as the polynomial response functions, are determined. This study demonstrates that the relative entropies may be efficiently used in reliability assessment alongside the widely used first-order reliability method (FORM). The relative entropy concept enables the study of the probabilistic distance between any two distributions, so that structural resistance and extreme effort in elastoplastic behavior need not be restricted to Gaussian distributions.
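For two univariate Gaussians, the four relative entropies named above have closed forms, which a minimal sketch can state directly. The resistance/effort parameters below are illustrative placeholders, not values from the paper's FEM analysis.

```python
# Hedged sketch: closed-form relative entropies between two univariate Gaussians
# N(mu0, s0^2) and N(mu1, s1^2); inputs are illustrative, not from the paper.
import numpy as np

def kl(mu0, s0, mu1, s1):                  # Kullback–Leibler D(N0 || N1)
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def jeffreys(mu0, s0, mu1, s1):            # symmetrized KL
    return kl(mu0, s0, mu1, s1) + kl(mu1, s1, mu0, s0)

def bhattacharyya(mu0, s0, mu1, s1):
    return (mu0 - mu1)**2 / (4 * (s0**2 + s1**2)) \
        + 0.5 * np.log((s0**2 + s1**2) / (2 * s0 * s1))

def squared_hellinger(mu0, s0, mu1, s1):
    bc = np.sqrt(2 * s0 * s1 / (s0**2 + s1**2)) \
        * np.exp(-(mu0 - mu1)**2 / (4 * (s0**2 + s1**2)))
    return 1.0 - bc

# resistance ~ N(300, 15^2) MPa vs. effort ~ N(235, 20^2) MPa (illustrative)
print(kl(300, 15, 235, 20), jeffreys(300, 15, 235, 20),
      bhattacharyya(300, 15, 235, 20), squared_hellinger(300, 15, 235, 20))
```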

24 pages, 1216 KiB  
Article
Empirical Squared Hellinger Distance Estimator and Generalizations to a Family of α-Divergence Estimators
by Rui Ding and Andrew Mullhaupt
Entropy 2023, 25(4), 612; https://doi.org/10.3390/e25040612 - 4 Apr 2023
Cited by 4 | Viewed by 3281
Abstract
We present an empirical estimator for the squared Hellinger distance between two continuous distributions, which almost surely converges. We show that the divergence estimation problem can be solved directly using the empirical CDF and does not need the intermediate step of estimating the densities. We illustrate the proposed estimator on several one-dimensional probability distributions. Finally, we extend the estimator to a family of estimators for the family of α-divergences, which almost surely converge as well, and discuss the uniqueness of this result. We demonstrate applications of the proposed Hellinger affinity estimators to approximately bounding the Neyman–Pearson regions.
(This article belongs to the Section Information Theory, Probability and Statistics)
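A minimal CDF-only construction in the spirit of the paper (not the authors' exact estimator or its almost-sure-convergence argument): partition the line at pooled sample quantiles, form each sample's cell probabilities from empirical-CDF increments, and take one minus the discrete Bhattacharyya affinity.

```python
# Hedged sketch: density-free squared-Hellinger estimate from two samples via a
# shared quantile partition; discretization biases the estimate slightly downward.
import numpy as np

def empirical_squared_hellinger(x, y, n_cells=50):
    cuts = np.quantile(np.concatenate([x, y]), np.linspace(0, 1, n_cells + 1))
    px, _ = np.histogram(x, bins=cuts)
    py, _ = np.histogram(y, bins=cuts)
    px, py = px / px.sum(), py / py.sum()
    return 1.0 - np.sqrt(px * py).sum()    # H^2 = 1 - Bhattacharyya affinity

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
y = rng.normal(0.5, 1.0, 10_000)
# exact H^2 for these Gaussians: 1 - exp(-0.5**2 / 8) ≈ 0.0308
print(empirical_squared_hellinger(x, y))
```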

21 pages, 6903 KiB  
Article
Analysis of Stochastic Distances and Wishart Mixture Models Applied on PolSAR Images
by Naiallen Carolyne Rodrigues Lima Carvalho, Leonardo Sant’Anna Bins and Sidnei João Siqueira Sant’Anna
Remote Sens. 2019, 11(24), 2994; https://doi.org/10.3390/rs11242994 - 12 Dec 2019
Cited by 7 | Viewed by 3426
Abstract
This paper addresses unsupervised classification strategies applied to Polarimetric Synthetic Aperture Radar (PolSAR) images. We analyze the performance of the complex Wishart distribution, which is a widely used model for multi-look PolSAR images, and the robustness of five stochastic distances (Bhattacharyya, Kullback–Leibler, Rényi, Hellinger and Chi-square) between Wishart distributions. Two unsupervised classification strategies were chosen: the Stochastic Clustering (SC) algorithm, which is based on the K-means algorithm but uses a stochastic distance as the similarity metric, and the Expectation-Maximization (EM) algorithm for the Wishart Mixture Model. With the aim of assessing the performance of all the algorithms presented here, we performed a Monte Carlo simulation over a set of simulated PolSAR images. A second experiment was conducted using the study area of the Tapajós National Forest and its surroundings, in the Brazilian Amazon Forest. The PolSAR images were acquired by the PALSAR sensor. The results, in both experiments, suggest that the EM algorithm and the SC with the Hellinger and with the Bhattacharyya distances provide better classification performance. We also analyze the initialization problem for the SC and EM algorithms, and we demonstrate how the initial centroid choice influences the final classification result.
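A sketch of the Stochastic Clustering idea under a simplifying assumption: a K-means-style loop in which each pixel's polarimetric covariance matrix is assigned to the centroid minimizing a Bhattacharyya-type distance between zero-mean complex Gaussians. The paper's distances are derived for Wishart distributions, and its centroid update may differ; this stand-in only illustrates the "K-means with a stochastic distance" structure.

```python
# Hedged sketch: K-means-style clustering of covariance matrices with a
# Bhattacharyya-type distance (stand-in for the paper's Wishart-based distances).
import numpy as np

def bhattacharyya_cov(S1, S2):
    M = (S1 + S2) / 2
    _, ld_m = np.linalg.slogdet(M)
    _, ld_1 = np.linalg.slogdet(S1)
    _, ld_2 = np.linalg.slogdet(S2)
    return 0.5 * (ld_m - 0.5 * (ld_1 + ld_2))

def stochastic_clustering(covs, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = [covs[i] for i in rng.choice(len(covs), k, replace=False)]
    for _ in range(iters):
        labels = np.array([np.argmin([bhattacharyya_cov(c, m) for m in centroids])
                           for c in covs])
        centroids = [np.mean([covs[i] for i in np.where(labels == j)[0]], axis=0)
                     if np.any(labels == j) else centroids[j] for j in range(k)]
    return labels

covs = [np.eye(3, dtype=complex) * s for s in (1.0, 1.1, 5.0, 5.2)]
print(stochastic_clustering(covs, k=2))
```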

13 pages, 3019 KiB  
Article
A Homogeneity Test for Comparing Gridded-Spatial-Point Patterns of Human Caused Fires
by M. Virtudes Alba-Fernández and Francisco Javier Ariza-López
Forests 2018, 9(8), 454; https://doi.org/10.3390/f9080454 - 27 Jul 2018
Cited by 3 | Viewed by 2878
Abstract
The statistical evaluation of the spatial similarity of human-caused fire patterns is an important issue for wildland fire analysis. This paper proposes a method based on observed data and on a statistical tool (a homogeneity test) that requires no explicit spatial-distribution hypothesis for the human-caused fire events. If a tessellation derived from a space-filling curve is superimposed on the spatial point patterns and a linearization mechanism is applied, the statistical problem of testing the similarity between the spatial point patterns is equivalent to that of testing the homogeneity of the two multinomial distributions obtained by modeling the proportions of cases in each cell of the tessellation. This way of comparing spatial point patterns is free of any hypothesis on the underlying spatial point process. Because the data are spatially over-dispersed, the existence of many grid cells without any count is a problem for classical statistical homogeneity tests. Our work overcomes this problem by applying specific test statistics based on the squared Hellinger distance. Simulations and actual data are used to tune the process and to demonstrate the capabilities of the proposal. The results indicate that a new and robust method for comparing spatial point patterns of human-caused fires is available.
(This article belongs to the Section Forest Ecology and Management)
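A hedged sketch of the test's core mechanics under the multinomial reduction described above: the linearized grid counts define two multinomial distributions, the discrete squared Hellinger distance serves as the test statistic, and a pooled-null bootstrap calibrates the p-value. The paper's exact statistics and calibration may differ in detail.

```python
# Hedged sketch: two-sample multinomial homogeneity test with a squared-Hellinger
# statistic and a bootstrap under the pooled null; data are illustrative.
import numpy as np

def sq_hellinger(p, q):
    return 1.0 - np.sqrt(p * q).sum()

def homogeneity_pvalue(counts_a, counts_b, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    na, nb = counts_a.sum(), counts_b.sum()
    stat = sq_hellinger(counts_a / na, counts_b / nb)
    pooled = (counts_a + counts_b) / (na + nb)     # null: common distribution
    boots = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.multinomial(na, pooled) / na
        b = rng.multinomial(nb, pooled) / nb
        boots[i] = sq_hellinger(a, b)
    return stat, np.mean(boots >= stat)

# two linearized grids of fire counts (illustrative, many empty cells)
a = np.array([5, 0, 0, 3, 12, 0, 1, 0, 0, 7])
b = np.array([4, 1, 0, 2, 15, 0, 0, 0, 1, 6])
print(homogeneity_pvalue(a, b))
```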

10 pages, 761 KiB  
Article
Analysis of Thematic Similarity Using Confusion Matrices
by José L. García-Balboa, María V. Alba-Fernández, Francisco J. Ariza-López and José Rodríguez-Avi
ISPRS Int. J. Geo-Inf. 2018, 7(6), 233; https://doi.org/10.3390/ijgi7060233 - 20 Jun 2018
Cited by 19 | Viewed by 4802
Abstract
The confusion matrix is the standard way to report on the thematic accuracy of geographic data (spatial databases, topographic maps, thematic maps, classified images, remote sensing products, etc.). Two widely adopted indices for the assessment of thematic quality are derived from the confusion matrix: overall accuracy (OA) and the Kappa coefficient (κ), both of which have drawn criticism from some authors. Both can be used to test the similarity of two independent classifications by means of a simple statistical hypothesis test, which is the usual practice. Nevertheless, this is not recommended, because different combinations of cell values in the matrix can yield the same value of OA or κ, due to the aggregation of data needed to compute these indices. Thus, not rejecting a test for equality between two index values does not necessarily mean that the two matrices are similar. Therefore, we present a new statistical tool to evaluate the similarity between two confusion matrices. It takes into account that the numbers of sample units correctly and incorrectly classified can be modeled by means of a multinomial distribution, so it uses the individual cell values in the matrices rather than aggregated information such as the OA or κ values. For this purpose, a test function based on the discrete squared Hellinger distance, which is a measure of similarity between probability distributions, is considered. Given that the asymptotic approximation of the null distribution of the test statistic is rather poor for small and moderate sample sizes, we used a bootstrap estimator. To explore how the p-value evolves, we applied the proposed method to several predefined matrices perturbed within a specified range. Finally, a complete numerical example of the comparison of two matrices is presented.
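Under the same multinomial view, a minimal sketch of the test statistic: each confusion matrix is flattened into one multinomial over all its cells, so the comparison uses cell-level structure rather than the aggregated OA or κ. A pooled-null bootstrap like the one sketched for the previous article would supply the p-value; the matrices below are illustrative.

```python
# Hedged sketch: discrete squared Hellinger distance between two confusion
# matrices treated as multinomials over their cells; matrices are illustrative.
import numpy as np

m1 = np.array([[50, 3, 2], [4, 60, 6], [1, 5, 70]])
m2 = np.array([[48, 5, 2], [6, 58, 6], [2, 4, 70]])

p, q = m1.ravel() / m1.sum(), m2.ravel() / m2.sum()
h2 = 1.0 - np.sqrt(p * q).sum()              # cell-level, not OA/kappa-based
print(f"H^2 between confusion matrices: {h2:.4f}")
```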

34 pages, 1318 KiB  
Article
Statistical Distances and Their Applications to Biophysical Parameter Estimation: Information Measures, M-Estimates, and Minimum Contrast Methods
by Ganna Leonenko, Sietse O. Los and Peter R. J. North
Remote Sens. 2013, 5(3), 1355-1388; https://doi.org/10.3390/rs5031355 - 14 Mar 2013
Cited by 32 | Viewed by 7472
Abstract
Radiative transfer models predicting the bidirectional reflectance factor (BRF) of leaf canopies are powerful tools that relate biophysical parameters such as leaf area index (LAI), fractional vegetation cover (fV), and the fraction of photosynthetically active radiation absorbed by the green parts of the vegetation canopy (fAPAR) to remotely sensed reflectance data. One of the most successful approaches to biophysical parameter estimation is the inversion of detailed radiative transfer models through the construction of Look-Up Tables (LUTs). The solution of the inverse problem requires additional information on canopy structure, soil background, and leaf properties, and the relationships between these parameters and the measured reflectance data are often nonlinear. The commonly used approach for optimizing a solution is based on minimizing the least squares estimate between model and observations (referred to as a cost function or distance; here we also use the terms "statistical distance", "divergence", or "metric", which are common in the statistical literature). This paper investigates how least-squares minimization and alternative distances affect the solution to the inverse problem. The paper provides a comprehensive list of different cost functions from the statistical literature, which can be divided into three major classes: information measures, M-estimates, and minimum contrast methods. We found that, for the conditions investigated, Least Square Estimation (LSE) is not an optimal statistical distance for the estimation of biophysical parameters. Our results indicate that other statistical distances, such as the two power measures and the Hellinger, Pearson chi-squared, Arimoto, and Koenker–Basset distances, result in better estimates of biophysical parameters than LSE; in some cases the parameter estimation was improved by 15%.
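A sketch of LUT inversion with swappable cost functions, under stated assumptions: a synthetic LUT of (LAI, fV) parameter vectors with fake band reflectances replaces a real radiative transfer model, and the Hellinger variant on renormalized spectra is one plausible reading of a distance-based cost, not the paper's exact definition.

```python
# Hedged sketch: retrieve parameters as the LUT row minimizing the chosen
# statistical distance to the observed spectrum; LUT values are synthetic.
import numpy as np

def lse(obs, sim):
    return np.sum((obs - sim) ** 2)

def hellinger(obs, sim):                    # spectra renormalized to distributions
    p, q = obs / obs.sum(), sim / sim.sum()
    return 1.0 - np.sqrt(p * q).sum()

rng = np.random.default_rng(1)
params = rng.uniform([0.0, 0.0], [6.0, 1.0], size=(500, 2))   # (LAI, fV) grid
lut = np.abs(rng.normal(0.3, 0.1, size=(500, 7)))             # fake 7-band BRFs
observed = np.clip(lut[123] + rng.normal(0, 0.01, 7), 1e-6, None)  # noisy "measurement"

for cost in (lse, hellinger):
    best = min(range(len(lut)), key=lambda i: cost(observed, lut[i]))
    print(cost.__name__, "retrieved params:", params[best])
```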
