Entropy, Volume 26, Issue 7 (July 2024) – 83 articles

Cover Story: The neuroimaging field is, in many ways, still metabolizing the dynamic imaging revolution that began in 2010. Among the discoveries made in the intervening fourteen years is that brain function appears to operate in a remarkably low-dimensional space. This makes the problem of accessing and quantifying brain dynamics both simpler and more difficult: simpler because researchers need fewer variables, and more difficult because mapping the dimensions of this space, and succinctly measuring signal trajectories within it, remain unsolved problems. We propose a means to map these dimensions via ICA and to quantify the resultant signal trajectory repertoires. This framework identifies meaningful alterations between clinical groups and links functional alterations to some cognitive scores.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
10 pages, 253 KiB  
Article
Landauer Principle and the Second Law in a Relativistic Communication Scenario
by Yuri J. Alvim and Lucas C. Céleri
Entropy 2024, 26(7), 613; https://doi.org/10.3390/e26070613 - 22 Jul 2024
Viewed by 754
Abstract
The problem of formulating thermodynamics in a relativistic scenario remains unresolved, although many proposals exist in the literature. The challenge arises due to the intrinsic dynamic structure of spacetime as established by the general theory of relativity. With the discovery of the physical nature of information, which underpins Landauer’s principle, we believe that information theory should play a role in understanding this problem. In this work, we contribute to this endeavour by considering a relativistic communication task between two partners, Alice and Bob, in a general Lorentzian spacetime. We then assume that the receiver, Bob, reversibly operates a local heat engine powered by information, and seek to determine the maximum amount of work he can extract from this device. As Bob cannot extract work for free, by applying both Landauer’s principle and the second law of thermodynamics, we establish a bound on the energy Bob must spend to acquire the information in the first place. This bound is a function of the spacetime metric and the properties of the communication channel. Full article
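For reference, the flat-spacetime Landauer bound that the paper's communication bound generalizes states that erasing one bit of information at temperature T dissipates at least

```latex
E_{\mathrm{diss}} \ge k_B T \ln 2
```

In the relativistic setting considered above, the corresponding bound additionally depends on the spacetime metric and the properties of the channel.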
85 pages, 47129 KiB  
Review
One-Dimensional Relativistic Self-Gravitating Systems
by Robert B. Mann
Entropy 2024, 26(7), 612; https://doi.org/10.3390/e26070612 - 21 Jul 2024
Viewed by 615
Abstract
One of the oldest problems in physics is that of calculating the motion of N particles under a specified mutual force: the N-body problem. Much is known about this problem if the specified force is non-relativistic gravity, and considerable progress has been made by considering the problem in one spatial dimension. Here, I review what is known about the relativistic gravitational N-body problem. Reduction to one spatial dimension has the feature of the absence of gravitational radiation, thereby allowing for a clear comparison between the physics of one-dimensional relativistic and non-relativistic self-gravitating systems. After describing how to obtain a relativistic theory of gravity coupled to N point particles, I discuss in turn the two-body, three-body, four-body, and N-body problems. Quite general exact solutions can be obtained for the two-body problem, unlike the situation in general relativity in three spatial dimensions for which only highly specified solutions exist. The three-body problem exhibits mild forms of chaos, and provides one of the first theoretical settings in which relativistic chaos can be studied. For N ≥ 4, other interesting features emerge. Relativistic self-gravitating systems have a number of interesting problems awaiting further investigation, providing us with a new frontier for exploring relativistic many-body systems. Full article
(This article belongs to the Special Issue Statistical Mechanics of Self-Gravitating Systems)
17 pages, 345 KiB  
Article
How to Partition a Quantum Observable
by Caleb Merrick Webb and Charles Allen Stafford
Entropy 2024, 26(7), 611; https://doi.org/10.3390/e26070611 - 20 Jul 2024
Cited by 1 | Viewed by 638
Abstract
We present a partition of quantum observables in an open quantum system that is inherited from the division of the underlying Hilbert space or configuration space. It is shown that this partition leads to the definition of an inhomogeneous continuity equation for generic, non-local observables. This formalism is employed to describe the local evolution of the von Neumann entropy of a system of independent quantum particles out of equilibrium. Crucially, we find that all local fluctuations in the entropy are governed by an entropy current operator, implying that the production of entanglement entropy is not measured by this partitioned entropy. For systems linearly perturbed from equilibrium, it is shown that this entropy current is equivalent to a heat current, provided that the system-reservoir coupling is partitioned symmetrically. Finally, we show that any other partition of the coupling leads directly to a divergence of the von Neumann entropy. Thus, we conclude that Hilbert-space partitioning is the only partition of the von Neumann entropy that is consistent with the laws of thermodynamics. Full article
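Schematically (the notation below is ours, not the authors'), an inhomogeneous continuity equation for an observable A restricted to a region Ω of the partition pairs a boundary current with a source term:

```latex
\frac{d}{dt}\,\langle \hat{A} \rangle_{\Omega} \;=\; -\,\langle \hat{J}_{A} \rangle_{\partial\Omega} \;+\; \langle \hat{\Sigma}_{A} \rangle_{\Omega}
```

The source term vanishes for strictly local observables; its survival for non-local observables is what makes the equation inhomogeneous.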
18 pages, 516 KiB  
Article
Likelihood Inference for Factor Copula Models with Asymmetric Tail Dependence
by Harry Joe and Xiaoting Li
Entropy 2024, 26(7), 610; https://doi.org/10.3390/e26070610 - 19 Jul 2024
Viewed by 624
Abstract
For multivariate non-Gaussian data involving copulas, likelihood inference is dominated by the data in the middle, and fitted models might not be very good for joint tail inference, such as assessing the strength of tail dependence. When preliminary data and likelihood analysis suggest asymmetric tail dependence, a method is proposed to improve extreme value inferences based on the joint lower and upper tails. A prior that uses previous information on tail dependence can be used in combination with the likelihood. Combining the prior and the likelihood (which in practice has some degree of misspecification) yields a tilted log-likelihood; inferences with suitably transformed parameters can then be based on Bayesian computing methods or on numerical optimization of the tilted log-likelihood to obtain the posterior mode and the Hessian at this mode. Full article
(This article belongs to the Special Issue Bayesianism)
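A minimal numerical sketch of the tilted log-likelihood idea, with a toy Gaussian model standing in for the copula likelihood (the model, prior, and names are illustrative, not the authors' specification):

```python
import numpy as np
from scipy.optimize import minimize

def tilted_negloglik(theta, data, loglik, logprior):
    # Tilted log-likelihood = log-likelihood + log-prior; negated for a minimizer.
    return -(loglik(theta, data) + logprior(theta))

# Toy stand-ins: a Gaussian location model and a N(0, 2^2) prior on the parameter.
loglik = lambda th, x: -0.5 * np.sum((x - th[0]) ** 2)
logprior = lambda th: -0.5 * (th[0] / 2.0) ** 2

x = np.random.default_rng(0).normal(1.0, 1.0, 200)
res = minimize(tilted_negloglik, x0=[0.0], args=(x, loglik, logprior))
print(res.x, res.hess_inv)  # posterior mode and inverse Hessian at the mode
```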
18 pages, 6928 KiB  
Article
Streamflow Prediction Using Complex Networks
by Abdul Wajed Farhat, B. Deepthi and Bellie Sivakumar
Entropy 2024, 26(7), 609; https://doi.org/10.3390/e26070609 - 18 Jul 2024
Viewed by 854
Abstract
The reliable prediction of streamflow is crucial for various water resources, environmental, and ecosystem applications. The current study employs a complex networks-based approach for the prediction of streamflow. The approach consists of three major steps: (1) the formation of a network using streamflow time series; (2) the calculation of the clustering coefficient (CC) as a network measure; and (3) the use of a clustering coefficient-based nearest neighbor search procedure for streamflow prediction. For network construction, each timestep is considered as a node and the existence of a link between any node pair is identified based on the difference (distance) between the streamflow values of the nodes. Different distance threshold values are used to identify the critical distance threshold to form the network. The complex networks-based approach is implemented for the prediction of daily streamflow at 142 stations in the contiguous United States. The prediction accuracy is quantified using three statistical measures: correlation coefficient (R), normalized root mean square error (NRMSE), and Nash–Sutcliffe efficiency (NSE). The influence of the number of neighbors on the prediction accuracy is also investigated. The results, obtained with the critical distance threshold, reveal that the clustering coefficients for the 142 stations range from 0.799 to 0.999. Overall, the prediction approach yields reasonably good results for all 142 stations, with R values ranging from 0.05 to 0.99, NRMSE values ranging from 0.1 to 12.3, and NSE values ranging from −0.89 to 0.99. An attempt is also made to examine the relationship between prediction accuracy and the catchment characteristics/streamflow statistical properties (drainage area, mean flow, coefficient of variation of flow). The results suggest that the prediction accuracy does not have much of a relationship with the drainage area or the mean streamflow, but it does with the coefficient of variation of flow. The outcomes from this study are certainly promising regarding the application of complex networks-based concepts for the prediction of streamflow (and other hydrologic) time series. Full article
(This article belongs to the Special Issue Nonlinear Dynamical Behaviors in Complex Systems)
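A compact sketch of the three steps (thresholded network, clustering coefficient, nearest-neighbor prediction); the threshold and neighbor count below are illustrative, not the paper's calibrated values:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
q = rng.lognormal(size=300)                # stand-in daily streamflow series

# Step 1: one node per timestep; link node pairs whose flows differ by < threshold.
eps = 0.2 * q.std()                        # assumed "critical" distance threshold
G = nx.Graph()
G.add_nodes_from(range(len(q)))
for i in range(len(q)):
    for j in range(i + 1, len(q)):
        if abs(q[i] - q[j]) < eps:
            G.add_edge(i, j)

# Step 2: clustering coefficient as the network measure.
print("average CC:", nx.average_clustering(G))

# Step 3: predict the next value from the successors of the k most similar past nodes.
def predict_next(t, k=5):
    nbrs = sorted(range(t), key=lambda j: abs(q[j] - q[t]))[:k]
    return np.mean([q[j + 1] for j in nbrs])

print("prediction at t=250:", predict_next(250))
```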
14 pages, 251 KiB  
Article
Local versus Global Time in Early Relativity Theory
by Dennis Dieks
Entropy 2024, 26(7), 608; https://doi.org/10.3390/e26070608 - 18 Jul 2024
Viewed by 517
Abstract
In his groundbreaking 1905 paper on special relativity, Einstein distinguished between local and global time in inertial systems, introducing his famous definition of distant simultaneity to give physical content to the notion of global time. Over the following decade, Einstein attempted to generalize this analysis of relativistic time to include accelerated frames of reference, which, according to the principle of equivalence, should also account for time in the presence of gravity. Characteristically, Einstein’s methodology during this period focused on simple, intuitively accessible physical situations, exhibiting a high degree of symmetry. However, in the final general theory of relativity, the a priori existence of such global symmetries cannot be assumed. Despite this, Einstein repeated some of his early reasoning patterns even in his 1916 review paper on general relativity and in later writings. Modern commentators have criticized these arguments as confused, invalid, and inconsistent. Here, we defend Einstein in the specific context of his use of global time and his derivations of the gravitational redshift formula. We argue that a detailed examination of Einstein’s early work clarifies his later reasoning and demonstrates its consistency and validity. Full article
(This article belongs to the Special Issue Time and Temporal Asymmetries)
18 pages, 4754 KiB  
Article
Evaluating the Attraction of Scenic Spots Based on Tourism Trajectory Entropy
by Qiuhua Huang, Linyuan Xia, Qianxia Li and Yixiong Xia
Entropy 2024, 26(7), 607; https://doi.org/10.3390/e26070607 - 18 Jul 2024
Viewed by 533
Abstract
With the development of positioning technology and the widespread application of mobile positioning terminal devices, the acquisition of trajectory data has become increasingly convenient, and so has the mining of information about scenic spots and tourists from such data. This study used the normalization results of information entropy to evaluate the attraction of scenic spots and the experience index of tourists. Tourists and scenic spots were chosen as the probability variables to calculate information entropy, and the probability values of each variable were calculated according to certain methods. There is a certain competitive relationship between scenic spots of the same type. When the distance between various scenic spots is relatively close (less than 8 km), a strong cooperative relationship can be established. Scenic spots with various levels of attraction can generally be classified as follows: cultural heritage, natural landscape, and leisure and entertainment. Scenic spots with higher attraction are usually those with a higher A-level rating and convenient transportation. A considerable number of tourists do not choose to visit crowded scenic destinations but instead, guided by personal preferences and the freedom of independent travel, choose spots that interest them more. Full article
(This article belongs to the Section Multidisciplinary Applications)
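A sketch of the entropy normalization described above (the counts and the scaling to [0, 1] are illustrative assumptions):

```python
import numpy as np

def normalized_entropy(counts):
    # Shannon entropy of a count distribution, scaled by its maximum log(K).
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if len(p) < 2:
        return 0.0
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

# Illustrative visit counts of five tourist groups passing through one scenic spot.
print(normalized_entropy([40, 25, 20, 10, 5]))
```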
38 pages, 1053 KiB  
Article
Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis
by Sharu Theresa Jose and Shana Moothedath
Entropy 2024, 26(7), 606; https://doi.org/10.3390/e26070606 - 17 Jul 2024
Cited by 2 | Viewed by 601
Abstract
We study stochastic linear contextual bandits (CB) where the agent observes a noisy version of the true context through a noise channel with unknown channel parameters. Our objective is to design an action policy that can “approximate” that of a Bayesian oracle that has access to the reward model and the noise channel parameter. We introduce a modified Thompson sampling algorithm and analyze its Bayesian cumulative regret with respect to the oracle action policy via information-theoretic tools. For Gaussian bandits with Gaussian context noise, our information-theoretic analysis shows that under certain conditions on the prior variance, the Bayesian cumulative regret scales as Õ(m√T), where m is the dimension of the feature vector and T is the time horizon. We also consider the problem setting where the agent observes the true context with some delay after receiving the reward, and show that delayed true contexts lead to lower regret. Finally, we empirically demonstrate the performance of the proposed algorithms against baselines. Full article
(This article belongs to the Special Issue Information Theoretic Learning with Its Applications)
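A minimal Thompson sampling sketch for a linear-Gaussian bandit in which the agent only sees a noisy context; this toy simply acts on the noisy feature, whereas the paper's algorithm reasons about the channel posterior (dimensions, noise level, and prior are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, T, K = 5, 2000, 4
theta_true = rng.normal(size=m)
Sigma_inv, b = np.eye(m), np.zeros(m)                 # Gaussian posterior statistics

for t in range(T):
    contexts = rng.normal(size=(K, m))                # true contexts (hidden)
    noisy = contexts + 0.1 * rng.normal(size=(K, m))  # assumed Gaussian noise channel
    theta_s = rng.multivariate_normal(np.linalg.solve(Sigma_inv, b),
                                      np.linalg.inv(Sigma_inv))
    a = int(np.argmax(noisy @ theta_s))               # act on the sampled parameter
    r = contexts[a] @ theta_true + rng.normal()
    Sigma_inv += np.outer(noisy[a], noisy[a])         # update with noisy features
    b += r * noisy[a]
```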
15 pages, 7313 KiB  
Article
Modulated Radio Frequency Stealth Waveforms for Ultra-Wideband Radio Fuzes
by Kaiwei Wu, Bing Yang, Shijun Hao, Yanbin Liang and Zhonghua Huang
Entropy 2024, 26(7), 605; https://doi.org/10.3390/e26070605 - 17 Jul 2024
Viewed by 579
Abstract
The increasingly complex electromagnetic environment of modern warfare and the proliferation of intelligent jamming threaten to reduce the survival rate of radio fuzes on the battlefield. Radio frequency (RF) stealth technology can fundamentally improve the anti-interception and reconnaissance capabilities of radio fuzes, thereby lessening the probability of them being intercepted, recognized, and jammed by the enemy. In this paper, an RF stealth waveform based on chaotic pulse-position modulation is proposed for ultra-wideband (UWB) radio fuzes. Adding a perturbation signal based on the Tent map ensures that the chaotic sequences have sufficiently long periods despite hardware byte limitations. Measuring the approximate entropy and sequence period shows that the Tent map with the addition of perturbation signals can maintain good randomness under byte constraints, closely approximating the Tent map with ideal precision. Simulations verify that the proposed chaotic mapping used to modulate the pulse position of an ultra-wideband radio fuze signal results in superior detection, anti-interception, and anti-jamming performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
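A sketch of a Tent map iterated at finite precision with a periodic perturbation to keep the orbit from collapsing onto a short cycle; the word length, perturbation period, and injection rule are assumptions, not the paper's design:

```python
import numpy as np

def tent_perturbed(x0, n, mu=1.99999, bits=16, period=97):
    scale = 2 ** bits                      # emulate a fixed hardware word length
    x, out = x0, []
    for i in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # Tent map step
        x = round(x * scale) / scale                # quantize to 'bits' bits
        if i % period == period - 1:                # periodic small perturbation
            x = (x + 1.0 / scale) % 1.0
        out.append(x)
    return np.array(out)

seq = tent_perturbed(0.37, 10000)
print(seq[:5])
```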
10 pages, 312 KiB  
Article
The Holographic Principle Comes from Finiteness of the Universe’s Geometry
by Arkady Bolotin
Entropy 2024, 26(7), 604; https://doi.org/10.3390/e26070604 - 17 Jul 2024
Viewed by 654
Abstract
Discovered as an apparent pattern, a universal relation between geometry and information called the holographic principle has yet to be explained. This relation is unfolded in the present paper. As demonstrated here, the origin of the holographic principle lies in the fact that a geometry of physical space has only a finite number of points. Furthermore, it is shown that the puzzle of the holographic principle can be explained by a magnification of grid cells used to discretize geometrical magnitudes such as areas and volumes into sets of points. To wit, when grid cells of the Planck scale are projected from the surface of the observable universe into its interior, they become enlarged. For that reason, the space inside the observable universe is described by the set of points whose cardinality is equal to the number of points that constitute the universe’s surface. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
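For orientation, the area law at issue is the holographic bound: the maximum entropy of a region scales with the area A of its boundary measured in Planck units,

```latex
S_{\max} = \frac{k_B\, A}{4\,\ell_P^{2}}
```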
20 pages, 2860 KiB  
Article
A Secure Image Encryption Scheme Based on a New Hyperchaotic System and 2D Compressed Sensing
by Muou Liu, Chongyang Ning and Congxu Zhu
Entropy 2024, 26(7), 603; https://doi.org/10.3390/e26070603 - 16 Jul 2024
Viewed by 832
Abstract
In insecure communication environments where the communication bandwidth is limited, important image data must be compressed and encrypted for transmission. However, existing image compression and encryption algorithms suffer from poor image reconstruction quality and insufficient encryption security. To address these problems, this paper proposes an image compression and encryption scheme based on a newly designed hyperchaotic system and a two-dimensional compressed sensing (2DCS) technique. The chaotic performance of this hyperchaotic system is verified by bifurcation diagrams, Lyapunov diagrams, approximate entropy, and permutation entropy, and it shows certain advantages over traditional 2D chaotic systems. The new 2D chaotic system, used as a pseudo-random number generator, passes all NIST test items. Meanwhile, this paper improves on the existing 2D projected gradient (2DPG) algorithm, enhancing the quality of image compression and reconstruction and effectively reducing the transmission burden of confidential image communication. In addition, a new image encryption algorithm is designed for the new 2D chaotic system, and the security of the algorithm is verified by experiments such as key-space size analysis and encrypted-image information entropy. Full article
11 pages, 4783 KiB  
Article
Impact of Quantum Non-Locality and Electronic Non-Ideality on the Shannon Entropy for Atomic States in Dense Plasma
by Askhat T. Nuraly, Madina M. Seisembayeva, Karlygash N. Dzhumagulova and Erik O. Shalenov
Entropy 2024, 26(7), 602; https://doi.org/10.3390/e26070602 - 16 Jul 2024
Viewed by 642
Abstract
The influence of the collective and quantum effects on the Shannon information entropy for atomic states in dense nonideal plasma was investigated. The interaction potential, which takes into account the effect of quantum non-locality as well as electronic correlations, was used to solve the Schrödinger equation for the hydrogen atom. It is shown that taking into account ionic screening leads to an increase in entropy, while taking into account only electronic screening does not lead to significant changes. Full article
(This article belongs to the Section Statistical Physics)
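The quantity studied above is the position-space Shannon information entropy of the bound-state density ρ(r) = |ψ(r)|²:

```latex
S_r = -\int \rho(\mathbf{r}) \,\ln \rho(\mathbf{r}) \; d^{3}r
```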
17 pages, 1001 KiB  
Article
Enhanced Coexistence of Quantum Key Distribution and Classical Communication over Hollow-Core and Multi-Core Fibers
by Weiwen Kong, Yongmei Sun, Tianqi Dou, Yuheng Xie, Zhenhua Li, Yaoxian Gao, Qi Zhao, Na Chen, Wenpeng Gao, Yuanchen Hao, Peizhe Han, Yang Liu and Jianjun Tang
Entropy 2024, 26(7), 601; https://doi.org/10.3390/e26070601 - 15 Jul 2024
Viewed by 765
Abstract
In this paper, we investigate the impact of classical optical communications in quantum key distribution (QKD) over hollow-core fiber (HCF), multi-core fiber (MCF) and single-core fiber (SCF) and propose wavelength allocation schemes to enhance QKD performance. Firstly, we theoretically analyze noise interference in QKD over HCF, MCF and SCF, such as spontaneous Raman scattering (SpRS) and four-wave mixing (FWM). To mitigate these noise types and optimize QKD performance, we propose a joint noise suppression wavelength allocation (JSWA) scheme. FWM noise suppression wavelength allocation and Raman noise suppression wavelength allocation are also proposed for comparison. The JSWA scheme indicates a significant enhancement in extending the simultaneous transmission distance of classical signals and QKD, reaching approximately 100 km in HCF and 165 km in MCF under a classical power per channel of 10 dBm. Therefore, MCF offers a longer secure transmission distance compared with HCF when classical signals and QKD coexist in the C-band. However, when classical signals are in the C-band and QKD operates in the O-band, the performance of QKD in HCF surpasses that in MCF. This research establishes technical foundations for the design and deployment of QKD optical networks. Full article
(This article belongs to the Special Issue Classical and Quantum Networks: Theory, Modeling and Optimization)
12 pages, 551 KiB  
Article
Analyzing Sequential Betting with a Kelly-Inspired Convective-Diffusion Equation
by Darrell Velegol and Kyle J. M. Bishop
Entropy 2024, 26(7), 600; https://doi.org/10.3390/e26070600 - 15 Jul 2024
Viewed by 628
Abstract
The purpose of this article is to analyze a sequence of independent bets by modeling it with a convective-diffusion equation (CDE). The approach follows the derivation of the Kelly Criterion (i.e., with a binomial distribution for the numbers of wins and losses in a sequence of bets) and reframes it as a CDE in the limit of many bets. The use of the CDE clarifies the role of steady growth (characterized by a velocity U) and random fluctuations (characterized by a diffusion coefficient D) to predict a probability distribution for the remaining bankroll as a function of time. Whereas the Kelly Criterion selects the investment fraction that maximizes the median bankroll (0.50 quantile), we show that the CDE formulation can readily find an optimum betting fraction f for any quantile. We also consider the effects of “ruin” using an absorbing boundary condition, which describes the termination of the betting sequence when the bankroll becomes too small. We show that the probability of ruin can be expressed by a dimensionless Péclet number characterizing the relative rates of convection and diffusion. Finally, the fractional Kelly heuristic is analyzed to show how it impacts returns and ruin. The reframing of the Kelly approach with the CDE opens new possibilities to use known results from the chemico-physical literature to address sequential betting problems. Full article
(This article belongs to the Special Issue Monte Carlo Simulation in Statistical Physics)
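Schematically (notation here is illustrative), with x the log-bankroll, the CDE evolves the probability density P(x, t) under drift U and diffusion D; for a single binary bet with win probability p and net odds b, the classical Kelly fraction recovered by maximizing the median is f* = p − (1 − p)/b:

```latex
\frac{\partial P}{\partial t} = -U\,\frac{\partial P}{\partial x} + D\,\frac{\partial^{2} P}{\partial x^{2}},
\qquad
f^{*} = p - \frac{1-p}{b}
```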
17 pages, 452 KiB  
Article
Bootstrap Approximation of Model Selection Probabilities for Multimodel Inference Frameworks
by Andres Dajles and Joseph Cavanaugh
Entropy 2024, 26(7), 599; https://doi.org/10.3390/e26070599 - 15 Jul 2024
Viewed by 587
Abstract
Most statistical modeling applications involve the consideration of a candidate collection of models based on various sets of explanatory variables. The candidate models may also differ in terms of the structural formulations for the systematic component and the posited probability distributions for the random component. A common practice is to use an information criterion to select a model from the collection that provides an optimal balance between fidelity to the data and parsimony. The analyst then typically proceeds as if the chosen model was the only model ever considered. However, such a practice fails to account for the variability inherent in the model selection process, which can lead to inappropriate inferential results and conclusions. In recent years, inferential methods have been proposed for multimodel frameworks that attempt to provide an appropriate accounting of modeling uncertainty. In the frequentist paradigm, such methods should ideally involve model selection probabilities, i.e., the relative frequencies of selection for each candidate model based on repeated sampling. Model selection probabilities can be conveniently approximated through bootstrapping. When the Akaike information criterion is employed, Akaike weights are also commonly used as a surrogate for selection probabilities. In this work, we show that the conventional bootstrap approach for approximating model selection probabilities is impacted by bias. We propose a simple correction to adjust for this bias. We also argue that Akaike weights do not provide adequate approximations for selection probabilities, although they do provide a crude gauge of model plausibility. Full article
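A toy illustration of the two quantities compared above, using AIC over polynomial regression models (model family, resample count, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
y = 1.0 + 0.8 * x + rng.normal(0, 0.3, x.size)    # true model is degree 1

def aic(deg, xx, yy):
    coef = np.polyfit(xx, yy, deg)
    rss = np.sum((yy - np.polyval(coef, xx)) ** 2)
    return xx.size * np.log(rss / xx.size) + 2 * (deg + 2)  # +2: coeffs + variance

degs = [0, 1, 2]
wins = np.zeros(len(degs))
for _ in range(500):                              # bootstrap selection frequencies
    idx = rng.integers(0, x.size, x.size)
    wins[np.argmin([aic(d, x[idx], y[idx]) for d in degs])] += 1
print("bootstrap selection probabilities:", wins / wins.sum())

a = np.array([aic(d, x, y) for d in degs])        # Akaike weights on the full data
w = np.exp(-0.5 * (a - a.min()))
print("Akaike weights:", w / w.sum())
```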
22 pages, 1523 KiB  
Article
Research on a Three-Way Decision-Making Approach, Based on Non-Additive Measurement and Prospect Theory, and Its Application in Aviation Equipment Risk Analysis
by Ruicong Xia, Sirong Tong, Qiang Wang, Bingzhen Sun, Ziling Xu, Qiuhan Liu, Jiayang Yu and Fan Wu
Entropy 2024, 26(7), 598; https://doi.org/10.3390/e26070598 - 14 Jul 2024
Viewed by 715
Abstract
Due to the information non-independence of attributes, combined with a complex and changeable environment, risk analysis faces great difficulties. In view of this problem, this paper proposes a new three-way decision-making (3WD) method, combined with prospect theory and a non-additive measure, to cope with multi-source and incomplete risk information systems. Prospect theory improves the loss function of the original 3WD model, and the combination of non-additive measurement and probability measurement provides a new perspective on the meaning of decision-making, measuring relative degree by considering both expert knowledge and objective data. The theoretical basis and framework of this model are illustrated, and the model is applied to a real risk evaluation problem for in-service aviation equipment structures involving multiple incomplete risk information sources. Simulation analysis verifies the applicability of the method. This method can also evaluate and rank key risk factors in equipment structures, which provides a reliable basis for decisions in aviation safety management. Full article
(This article belongs to the Section Multidisciplinary Applications)
31 pages, 28677 KiB  
Article
Color Image Encryption Based on an Evolutionary Codebook and Chaotic Systems
by Yuan Cao and Yinglei Song
Entropy 2024, 26(7), 597; https://doi.org/10.3390/e26070597 - 12 Jul 2024
Viewed by 537
Abstract
Encryption of images is an important method that can effectively improve the security and privacy of crucial image data. Existing methods generally encrypt an image with a combination of scrambling and encoding operations. Currently, many applications require highly secure results for image encryption. New methods that can achieve improved randomness for both the scrambling and encoding processes in encryption are thus needed to further enhance the security of a cipher image. This paper proposes a new method that can securely encrypt color images. As the first step of the proposed method, a complete bit-level operation is utilized to scramble the binary bits in a color image to a full extent. For the second step, the bits in the scrambled image are processed with a sweeping operation to improve the encryption security. In the final step of encryption, a codebook that varies with evolutionary operations based on several chaotic systems is utilized to encrypt the partially encrypted image obtained in the second step. Experimental results on benchmark color images suggest that this new approach can securely encrypt color images and generate cipher images that remain secure under different types of attacks. The proposed approach is compared with several other state-of-the-art encryption approaches and the results show that it can achieve improved encryption security for cipher images. Experimental results thus suggest that this new approach can possibly be utilized practically in applications where color images need to be encrypted for content protection. Full article
24 pages, 517 KiB  
Review
A Survey on Error Exponents in Distributed Hypothesis Testing: Connections with Information Theory, Interpretations, and Applications
by Sebastián Espinosa, Jorge F. Silva and Sandra Céspedes
Entropy 2024, 26(7), 596; https://doi.org/10.3390/e26070596 - 12 Jul 2024
Viewed by 534
Abstract
A central challenge in hypothesis testing (HT) lies in determining the optimal balance between Type I (false positive) and Type II (non-detection or false negative) error probabilities. Analyzing these errors’ exponential rate of convergence, known as error exponents, provides crucial insights into system performance. Error exponents offer a lens through which we can understand how operational restrictions, such as resource constraints and impairments in communications, affect the accuracy of distributed inference in networked systems. This survey presents a comprehensive review of key results in HT, from the foundational Stein’s Lemma to recent advancements in distributed HT, all unified through the framework of error exponents. We explore asymptotic and non-asymptotic results, highlighting their implications for designing robust and efficient networked systems, such as event detection through lossy wireless sensor monitoring networks, collective perception-based object detection in vehicular environments, and clock synchronization in distributed environments, among others. We show that understanding the role of error exponents provides a valuable tool for optimizing decision-making and improving the reliability of networked systems. Full article
(This article belongs to the Special Issue Entropy-Based Statistics and Their Applications)
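For orientation, the Chernoff–Stein lemma referenced above: with the Type I error held below any fixed level α ∈ (0, 1), the optimal Type II error β_n of an n-sample test decays exponentially with exponent equal to the Kullback–Leibler divergence between the two hypotheses,

```latex
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n = D(P_0 \,\|\, P_1)
```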
11 pages, 1492 KiB  
Article
Characteristic Extraction and Assessment Methods for Transformers DC Bias Caused by Metro Stray Currents
by Aimin Wang, Sheng Lin, Guoxing Wu, Xiaopeng Li and Tao Wang
Entropy 2024, 26(7), 595; https://doi.org/10.3390/e26070595 - 11 Jul 2024
Viewed by 493
Abstract
Metro stray currents flowing into transformer-neutral points cause a high neutral DC and force the transformer to operate in a DC-biased state. Because the neutral DC caused by stray current varies with time, the neutral DC value cannot be used as the only characteristic indicator to evaluate the DC bias risk level. Thus, unified characteristic extraction and assessment methods are proposed to evaluate the DC bias risk of a transformer caused by stray current, considering the signals of transformer-neutral DC and vibration. In the characteristic extraction method, the primary characteristics are obtained by comparing the magnitude and frequency distributions of transformer-neutral DC and vibration with and without metro stray current invasion. By analyzing the correlation coefficients, the final characteristics are obtained by clustering the primary characteristics with high correlation. Then, the magnitude and frequency characteristics are extracted and used as indicators to evaluate the DC bias risk. Moreover, to avoid the influence of manual experience on indicator weights, the entropy weight method (EWM) is used to establish the assessment model. Finally, the proposed methods are applied based on the neutral DC and vibration test data of a certain transformer. The results show that the characteristic indicators can be extracted, and the transformer DC bias risk can be evaluated by using the proposed methods. Full article
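A minimal sketch of the entropy weight method step (the decision matrix is toy data; real indicators would first be normalized for direction and range):

```python
import numpy as np

# Rows: assessed samples; columns: positively oriented indicators (toy values).
X = np.array([[0.8, 120.0, 3.1],
              [0.5,  90.0, 2.2],
              [0.9, 150.0, 4.0],
              [0.4,  80.0, 1.8]])

P = X / X.sum(axis=0)                              # column-wise proportion matrix
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per indicator, in [0, 1]
w = (1 - E) / (1 - E).sum()                        # low entropy -> higher weight
print("indicator weights:", w)
```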
11 pages, 207 KiB  
Article
Temporal Direction, Intuitionism and Physics
by Yuval Dolev
Entropy 2024, 26(7), 594; https://doi.org/10.3390/e26070594 - 11 Jul 2024
Viewed by 420
Abstract
In a recent paper, Nicolas Gisin suggests that by conducting physics with intuitionistic rather than classical mathematics, rich temporality—that is, passage and tense, and specifically the future’s openness—can be incorporated into physics. Physics based on classical mathematics is tenseless and deterministic, and that, so he holds, renders it incongruent with experience. According to Gisin, physics ought to represent the indeterminate nature of reality, and he proposes that intuitionistic mathematics is the key to succeeding in doing so. While I share his insistence on the reality of passage and tense and on the future being real and open, I argue that the amendment he offers does not work. I show that, its attunement to time notwithstanding, intuitionistic mathematics is as tenseless as classical mathematics and that physics is bound to remain tenseless regardless of the math it employs. There is much to learn about tensed time, but the task belongs to phenomenology and not to physics. Full article
(This article belongs to the Special Issue Time and Temporal Asymmetries)
16 pages, 479 KiB  
Article
NodeFlow: Towards End-to-End Flexible Probabilistic Regression on Tabular Data
by Patryk Wielopolski, Oleksii Furman and Maciej Zięba
Entropy 2024, 26(7), 593; https://doi.org/10.3390/e26070593 - 11 Jul 2024
Viewed by 636
Abstract
We introduce NodeFlow, a flexible framework for probabilistic regression on tabular data that combines Neural Oblivious Decision Ensembles (NODEs) and Conditional Continuous Normalizing Flows (CNFs). It offers improved modeling capabilities for arbitrary probabilistic distributions, addressing the limitations of traditional parametric approaches. In NodeFlow, the NODE captures complex relationships in tabular data through a tree-like structure, while the conditional CNF utilizes the NODE’s output space as a conditioning factor. The training process of NodeFlow employs standard gradient-based learning, facilitating the end-to-end optimization of the NODEs and CNF-based density estimation. This approach ensures outstanding performance, ease of implementation, and scalability, making NodeFlow an appealing choice for practitioners and researchers. Comprehensive assessments on benchmark datasets underscore NodeFlow’s efficacy, revealing its achievement of state-of-the-art outcomes in multivariate probabilistic regression setup and its strong performance in univariate regression tasks. Furthermore, ablation studies are conducted to justify the design choices of NodeFlow. In conclusion, NodeFlow’s end-to-end training process and strong performance make it a compelling solution for practitioners and researchers. Additionally, it opens new avenues for research and application in the field of probabilistic regression on tabular data. Full article
(This article belongs to the Special Issue Deep Generative Modeling: Theory and Applications)
26 pages, 5970 KiB  
Review
Superconducting Quantum Simulation for Many-Body Physics beyond Equilibrium
by Yunyan Yao and Liang Xiang
Entropy 2024, 26(7), 592; https://doi.org/10.3390/e26070592 - 11 Jul 2024
Viewed by 791
Abstract
Quantum computing is an exciting field that uses quantum principles, such as quantum superposition and entanglement, to tackle complex computational problems. Superconducting quantum circuits, based on Josephson junctions, are one of the most promising physical realizations for achieving the long-term goal of building fault-tolerant quantum computers. The past decade has witnessed the rapid development of this field, where many intermediate-scale multi-qubit experiments have emerged to simulate nonequilibrium quantum many-body dynamics that are challenging for classical computers. Here, we review the basic concepts of superconducting quantum simulation and recent experimental progress in exploring exotic nonequilibrium quantum phenomena emerging in strongly interacting many-body systems, e.g., many-body localization, quantum many-body scars, and discrete time crystals. We further discuss the prospects of quantum simulation experiments to truly solve open problems in nonequilibrium many-body systems. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
14 pages, 3225 KiB  
Article
ESE-YOLOv8: A Novel Object Detection Algorithm for Safety Belt Detection during Working at Heights
by Qirui Zhou, Dandan Liu and Kang An
Entropy 2024, 26(7), 591; https://doi.org/10.3390/e26070591 - 11 Jul 2024
Viewed by 558
Abstract
To address the challenges associated with supervising workers who wear safety belts while working at heights, this study proposes a solution involving the utilization of an object detection model to replace manual supervision. A novel object detection model, named ESE-YOLOv8, is introduced. The integration of the Efficient Multi-Scale Attention (EMA) mechanism within this model enhances information entropy through cross-channel interaction and encodes spatial information into the channels, thereby enabling the model to obtain rich and significant information during feature extraction. By employing GSConv to reconstruct the neck into a slim-neck configuration, the computational load of the neck is reduced without loss of information entropy, allowing the attention mechanism to function more effectively and thereby improving accuracy. During the model training phase, a regression loss function named the Efficient Intersection over Union (EIoU) is employed to further refine the model’s object localization capabilities. Experimental results demonstrate that the ESE-YOLOv8 model achieves an average precision of 92.7% at an IoU threshold of 50% and an average precision of 75.7% within the IoU threshold range of 50% to 95%. These results surpass the performance of the baseline model and the widely utilized YOLOv5, and demonstrate competitiveness among state-of-the-art models. Ablation experiments further confirm the effectiveness of the model’s enhancements. Full article
(This article belongs to the Section Multidisciplinary Applications)
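The EIoU regression loss, as commonly defined in the detection literature (b is the predicted box center, w and h its width and height, the gt superscript marks the ground truth, ρ is the Euclidean distance, and c, c_w, c_h are the diagonal, width, and height of the smallest enclosing box); stated here for orientation rather than as the paper's exact formulation:

```latex
\mathcal{L}_{\mathrm{EIoU}} = 1 - \mathrm{IoU}
+ \frac{\rho^{2}(\mathbf{b}, \mathbf{b}^{gt})}{c^{2}}
+ \frac{\rho^{2}(w, w^{gt})}{c_w^{2}}
+ \frac{\rho^{2}(h, h^{gt})}{c_h^{2}}
```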
29 pages, 2269 KiB  
Article
A Conditional Privacy-Preserving Identity-Authentication Scheme for Federated Learning in the Internet of Vehicles
by Shengwei Xu and Runsheng Liu
Entropy 2024, 26(7), 590; https://doi.org/10.3390/e26070590 - 10 Jul 2024
Viewed by 510
Abstract
With the rapid development of artificial intelligence and Internet of Things (IoT) technologies, automotive companies are integrating federated learning into connected vehicles to provide users with smarter services. Federated learning enables vehicles to collaboratively train a global model without sharing sensitive local data, thereby mitigating privacy risks. However, the dynamic and open nature of the Internet of Vehicles (IoV) makes it vulnerable to potential attacks, where attackers may intercept or tamper with transmitted local model parameters, compromising their integrity and exposing user privacy. Although existing solutions like differential privacy and encryption can address these issues, they may reduce data usability or increase computational complexity. To tackle these challenges, we propose a conditional privacy-preserving identity-authentication scheme, CPPA-SM2, to provide privacy protection for federated learning. Unlike existing methods, CPPA-SM2 allows vehicles to participate in training anonymously, thereby achieving efficient privacy protection. Performance evaluations and experimental results demonstrate that, compared to state-of-the-art schemes, CPPA-SM2 significantly reduces the overhead of signing, verification and communication while achieving more security features. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
29 pages, 6201 KiB  
Article
BPT-PLR: A Balanced Partitioning and Training Framework with Pseudo-Label Relaxed Contrastive Loss for Noisy Label Learning
by Qian Zhang, Ge Jin, Yi Zhu, Hongjian Wei and Qiu Chen
Entropy 2024, 26(7), 589; https://doi.org/10.3390/e26070589 - 10 Jul 2024
Cited by 1 | Viewed by 705
Abstract
While collecting training data, even with the manual verification of experts from crowdsourcing platforms, eliminating incorrect annotations (noisy labels) completely is difficult and expensive. In dealing with datasets that contain noisy labels, over-parameterized deep neural networks (DNNs) tend to overfit, leading to poor generalization and classification performance. As a result, noisy label learning (NLL) has received significant attention in recent years. Existing research shows that although DNNs eventually fit all training data, they first prioritize fitting clean samples, then gradually overfit to noisy samples. Mainstream methods utilize this characteristic to divide training data but face two issues: class imbalance in the segmented data subsets and the optimization conflict between unsupervised contrastive representation learning and supervised learning. To address these issues, we propose a Balanced Partitioning and Training framework with Pseudo-Label Relaxed contrastive loss called BPT-PLR, which includes two crucial processes: a balanced partitioning process with a two-dimensional Gaussian mixture model (BP-GMM) and a semi-supervised oversampling training process with a pseudo-label relaxed contrastive loss (SSO-PLR). The former utilizes both semantic feature information and model prediction results to identify noisy labels, introducing a balancing strategy to maintain class balance in the divided subsets as much as possible. The latter adopts the latest pseudo-label relaxed contrastive loss to replace unsupervised contrastive loss, reducing optimization conflicts between semi-supervised and unsupervised contrastive losses to improve performance. We validate the effectiveness of BPT-PLR on four benchmark datasets in the NLL field: CIFAR-10/100, Animal-10N, and Clothing1M. Extensive experiments comparing with state-of-the-art methods demonstrate that BPT-PLR can achieve optimal or near-optimal performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
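For context, a one-dimensional version of the loss-based GMM partition that this family of methods builds on (the paper's BP-GMM is two-dimensional and adds a class-balancing strategy; the data here are synthetic):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.1, 800),   # clean samples: low loss
                         rng.normal(1.5, 0.4, 200)])  # noisy samples: high loss

gmm = GaussianMixture(n_components=2, random_state=0).fit(losses.reshape(-1, 1))
clean = int(np.argmin(gmm.means_.ravel()))            # low-mean component = clean
p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean]
labeled = np.where(p_clean > 0.5)[0]                  # treated as the clean subset
print(len(labeled), "of", len(losses), "samples flagged clean")
```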
18 pages, 4135 KiB  
Article
Effective Temporal Graph Learning via Personalized PageRank
by Ziyu Liao, Tao Liu, Yue He and Longlong Lin
Entropy 2024, 26(7), 588; https://doi.org/10.3390/e26070588 - 10 Jul 2024
Viewed by 498
Abstract
Graph representation learning aims to map nodes or edges within a graph to low-dimensional vectors, while preserving as much topological information as possible. During past decades, numerous algorithms for graph representation learning have emerged. Among them, proximity matrix representation methods have been shown to exhibit excellent performance in experiments and scale to large graphs with millions of nodes. However, with the rapid development of the Internet, information interactions are happening at the scale of billions every moment. Most methods for similarity matrix factorization still focus on static graphs, leading to incomplete similarity descriptions and low embedding quality. To enhance the embedding quality of temporal graph learning, we propose a temporal graph representation learning model based on the matrix factorization of Time-constrained Personalized PageRank (TPPR) matrices. TPPR, an extension of personalized PageRank (PPR) that incorporates temporal information, better captures node similarities in temporal graphs. Based on this, we use Singular Value Decomposition or Nonnegative Matrix Factorization to decompose TPPR matrices to obtain embedding vectors for each node. Through experiments on tasks such as link prediction, node classification, and node clustering across multiple temporal graphs, as well as a comparison with various experimental methods, we find that graph representation learning algorithms based on TPPR matrix factorization achieve overall outstanding scores on multiple temporal datasets, highlighting their effectiveness. Full article
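A minimal static sketch of the pipeline (PPR matrix, then factorization to embeddings); the temporal constraint that turns PPR into TPPR is omitted, and the log scaling is one common choice rather than the paper's:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # toy undirected graph
P = A / A.sum(axis=1, keepdims=True)            # row-stochastic transitions
alpha, n = 0.15, A.shape[0]

# Personalized PageRank rows: pi_s = alpha (I - (1 - alpha) P^T)^{-1} e_s.
M = np.linalg.inv(np.eye(n) - (1 - alpha) * P.T)
PPR = np.stack([alpha * M @ np.eye(n)[s] for s in range(n)])

U, S, _ = np.linalg.svd(np.log(np.maximum(PPR * n, 1.0)))  # log-scaled factorization
emb = U[:, :2] * np.sqrt(S[:2])                 # 2-dimensional node embeddings
print(emb)
```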
12 pages, 377 KiB  
Article
Biswas–Chatterjee–Sen Model on Solomon Networks with Two Three-Dimensional Lattices
by Gessineide Sousa Oliveira, Tayroni Alencar Alves, Gladstone Alencar Alves, Francisco Welington Lima and Joao Antonio Plascak
Entropy 2024, 26(7), 587; https://doi.org/10.3390/e26070587 - 10 Jul 2024
Viewed by 436
Abstract
The Biswas–Chatterjee–Sen (BChS) model of opinion dynamics has been studied on three-dimensional Solomon networks by means of extensive Monte Carlo simulations. Finite-size scaling relations for different lattice sizes have been used in order to obtain the relevant quantities of the system in the thermodynamic limit. From the simulation data it is clear that the BChS model undergoes a second-order phase transition. At the transition point, the critical exponents describing the behavior of the order parameter, the corresponding order parameter susceptibility, and the correlation length have been evaluated. From the values obtained for these critical exponents, one can confidently conclude that the BChS model in three dimensions is in a different universality class from the respective model defined on one- and two-dimensional Solomon networks, as well as from the usual Ising model on the same networks. Full article
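The standard finite-size scaling forms behind such exponent estimates (K is the noise-parameter analog of temperature, with critical value K_c, and L the linear system size):

```latex
O(K, L) \simeq L^{-\beta/\nu}\, \widetilde{O}\!\left((K - K_c)\,L^{1/\nu}\right),
\qquad
\chi(K, L) \simeq L^{\gamma/\nu}\, \widetilde{\chi}\!\left((K - K_c)\,L^{1/\nu}\right)
```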
20 pages, 931 KiB  
Article
Synergistic Dynamical Decoupling and Circuit Design for Enhanced Algorithm Performance on Near-Term Quantum Devices
by Yanjun Ji and Ilia Polian
Entropy 2024, 26(7), 586; https://doi.org/10.3390/e26070586 - 10 Jul 2024
Viewed by 561
Abstract
Dynamical decoupling (DD) is a promising technique for mitigating errors in near-term quantum devices. However, its effectiveness depends on both hardware characteristics and algorithm implementation details. This paper explores the synergistic effects of dynamical decoupling and optimized circuit design in maximizing the performance and robustness of algorithms on near-term quantum devices. By utilizing eight IBM quantum devices, we analyze how hardware features and algorithm design impact the effectiveness of DD for error mitigation. Our analysis takes into account factors such as circuit fidelity, scheduling duration, and hardware-native gate set. We also examine the influence of algorithmic implementation details, including specific gate decompositions, DD sequences, and optimization levels. The results reveal an inverse relationship between the effectiveness of DD and the inherent performance of the algorithm. Furthermore, we emphasize the importance of gate directionality and circuit symmetry in improving performance. This study offers valuable insights for optimizing DD protocols and circuit designs, highlighting the significance of a holistic approach that leverages both hardware features and algorithm design for the high-quality and reliable execution of near-term quantum algorithms. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
17 pages, 1350 KiB  
Article
Optimization and Evaluation of Tourism Mascot Design Based on Analytic Hierarchy Process–Entropy Weight Method
by Jing Wang, Fangmin Cheng and Chen Chen
Entropy 2024, 26(7), 585; https://doi.org/10.3390/e26070585 - 9 Jul 2024
Viewed by 624
Abstract
With the tourism industry continuing to boom, the importance of tourism mascots in promoting and publicizing tourism destinations is becoming increasingly prominent. Three core dimensions (market trends, appearance design, and audience feedback) are numerically investigated to support deep iteration of tourism mascot design. Further, a subjective and objective evaluation weighting model based on the analytic hierarchy process (AHP) and the entropy weighting method is proposed, aiming to exploit the advantages of both methods and ensure complete and accurate results. Taking the mascots of six famous tourist attractions in Xi’an as an example, the feasibility and effectiveness of the evaluation model are verified. Data analysis and modeling results confirm that the three core evaluation indexes of scalability, innovation, and recommendation should be the focus of tourism mascot design across the three dimensions of market trends, appearance design, and audience feedback. The evaluation index scores are 0.1235, 0.1170, and 0.1123, respectively, which further illustrates the priorities of mascot design. The evaluation model constructed by this research provides decision-makers with a comprehensive evaluation tool from the perspective of tourist experience, and also effectively assists the optimization process of mascot design. In addition, the model has good versatility and adaptability in structural design and evaluation logic and can be widely used in the optimization and evaluation of brand mascots. Full article
(This article belongs to the Section Multidisciplinary Applications)
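A sketch of the AHP half of the weighting model: subjective weights from the principal eigenvector of a pairwise comparison matrix, then one common multiplicative blend with entropy weights (the judgments, stand-in entropy weights, and combination rule are illustrative, not the paper's):

```python
import numpy as np

C = np.array([[1.0, 3.0, 2.0],      # Saaty-scale pairwise judgments
              [1/3., 1.0, 1/2.],    # for three criteria
              [1/2., 2.0, 1.0]])

vals, vecs = np.linalg.eig(C)
k = int(np.argmax(vals.real))
w_ahp = np.abs(vecs[:, k].real)
w_ahp /= w_ahp.sum()                # subjective AHP weights

w_ewm = np.array([0.2, 0.5, 0.3])   # stand-in objective entropy weights
w = w_ahp * w_ewm / (w_ahp * w_ewm).sum()   # multiplicative combination
print("combined weights:", w)
```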
17 pages, 2914 KiB  
Article
Adaptive Segmented Aggregation and Rate Assignment Techniques for Flexible-Length Polar Codes
by Souradip Saha, Shubham Mahajan, Marc Adrat and Wolfgang Gerstacker
Entropy 2024, 26(7), 584; https://doi.org/10.3390/e26070584 - 9 Jul 2024
Viewed by 435
Abstract
Polar codes have garnered a lot of attention from the scientific community, owing to their low-complexity implementation and provable capacity-achieving capability. They have been standardized for encoding information on the control channels in 5G wireless networks due to their robustness for short codeword lengths. The conventional approach to generating polar codes is to recursively use 2×2 kernels and polarize channel capacities. This approach, however, can only generate codewords of length N_orig = 2^n. To mitigate this limitation, multiple techniques have been developed, e.g., polarization kernels of larger sizes, multi-kernel polar codes, and downsizing techniques like puncturing or shortening. However, the availability of so many design options and parameters in turn makes the choice of design parameters quite challenging. In this paper, the authors propose a novel polar code construction technique called Adaptive Segmented Aggregation, which generates polar codewords of any arbitrary length. This approach divides the entire codeword into smaller segments that can be independently encoded and decoded, and then aggregated for channel processing. Additionally, a rate assignment methodology tuned to the design requirements is derived for the proposed technique. Full article
(This article belongs to the Special Issue New Advances in Error-Correcting Codes)
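For contrast with the segmented construction above, the conventional length-2^n polar transform obtained by recursing the 2×2 kernel (this is the standard transform, not the paper's aggregation scheme):

```python
import numpy as np

def polar_encode(u):
    # x = u G_N with G_N = F^{⊗n}, F = [[1, 0], [1, 1]], applied recursively.
    n = len(u)
    if n == 1:
        return u % 2
    top = polar_encode((u[:n // 2] + u[n // 2:]) % 2)  # XOR branch
    bot = polar_encode(u[n // 2:])                     # pass-through branch
    return np.concatenate([top, bot])

u = np.array([0, 1, 0, 0, 1, 1, 0, 1])   # length must be 2^n by construction
print(polar_encode(u))
```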