We have calculated the perturbative corrections to all the structure functions in the semileptonic decays of a heavy quark. Assuming an arbitrary gluon mass as a technical tool allowed us to obtain in parallel all the BLM corrections. We report the basic applications, viz. perturbative corrections to the hadronic mass and energy moments with full dependence on the charged-lepton energy cut. In the adopted scheme, with the OPE momentum scale separation around 1 GeV, the perturbative corrections to the moments are small and practically independent of Ecut; the BLM corrections are small, too. The corrections to the second mass-squared moment show some decrease with Ecut, consistent with the effect of the Darwin operator within the previously estimated theoretical uncertainty. Perturbative corrections in the pole-type schemes appear significant and vary with Eℓ, decreasing the moments at higher cuts. The hardness of the hadronic moments is quantitatively illustrated for different cuts on Eℓ.
We study the moments of the multiplicity distribution and their relation to the Lee-Yang zeros of the generating function in electron-positron and hadron-hadron high-energy collisions. Our work shows that GMD moments can reproduce the oscillatory behaviour seen in the experimental data and predicted by quantum chromodynamics at preasymptotic energies, and that they can also be used to distinguish electron-positron (e+e-) multiplicity data from hadron-hadron (pp and pp̄) multiplicity data. Furthermore, there appears to be a link between the development of the shoulder structure in the multiplicity distribution and the development of the ear structure in the Lee-Yang zeros. We predict that these structures will become very pronounced at 14 TeV. We argue that the development of these structures indicates an ongoing transition from quark-dominated soft scattering events to gluon-dominated semihard scattering events.
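As an illustration of the Lee-Yang zero analysis, the zeros of the multiplicity generating function G(z) = Σ_n P_n z^n can be located numerically as the roots of the truncated polynomial with coefficients P_n. The sketch below uses a negative binomial distribution as a stand-in for the GMD (the GMD parameterization itself is not reproduced here), and the parameter values nbar and k are purely illustrative.

```python
# Minimal sketch: Lee-Yang zeros of a multiplicity generating function.
# The NBD is used as a stand-in for the GMD; nbar and k are illustrative values only.
import numpy as np
from scipy.special import gammaln

nbar, k = 25.0, 1.8                           # illustrative mean multiplicity and shape parameter
n = np.arange(0, 200)
logP = (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
        + n * np.log(nbar / (nbar + k)) + k * np.log(k / (nbar + k)))
P = np.exp(logP)
P /= P.sum()                                  # renormalize the truncated distribution

# G(z) = sum_n P_n z^n; its Lee-Yang zeros are the roots of this polynomial.
zeros = np.roots(P[::-1])                     # np.roots expects the highest-order coefficient first
print(zeros[:5])                              # complex zeros; their pattern is what is studied
```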
This work reports first-principles calculations for the LiMgP half-Heusler compound doped with the transition-metal elements Cr, Mn, Co and Ni, motivated by recent findings in which ferromagnetic behavior is predicted. The studied LiMg0.95Y0.05P alloys (Y = Cr, Mn, Co and Ni) show ferromagnetic behavior. The calculations reveal that the main contributions to the net magnetization come from Cr, Mn, Co and Ni. The Cr2+ ion has four d electrons, of which two occupy the e+ states and the other two occupy the t2+ states; the latter lie at the Fermi level. For the LiMg0.95Co0.05P alloy, half-metallic behavior is predicted, with 100% spin polarization in the spin-down channel at the Fermi level. The LiMg0.95Ni0.05P alloy also exhibits half-metallic behavior, with the spin-down channel at the Fermi level occupied by the t2− minority states. The study finds a correlation between electronegativity and the magnetic properties of Cr-, Mn-, Co- and Ni-doped LiMgP, the trends of the partial moments, electronegativity and total moments being MNi<MCo<MCr<MMn; χNi>χCo>χCr>χMn and MTot(LiMg0.95Ni0.05P)<MTot(LiMg0.95Co0.05P)<MTot(LiMg0.95Cr0.05P)<MTot(LiMg0.95Mn0.05P).
This paper investigates the behavior of the LiMgN half-Heusler (HH) semiconductor doped with transition metals (TM = Mn, Fe, Co and Ni). HHs belong to the F4̄3m space group (No. 216) and have a zinc-blende-type structure that can be described by the chemical formula XYZ. The research methodology is theoretical analysis based on the principles of density functional theory (DFT). The studied LiMg0.95TM0.05N alloys display half-metallic behavior when TM = Fe, Co and Ni; hence, these systems could be promising candidates for spintronic applications thanks to their ferromagnetism. The principal contribution to the magnetism of the LiMg0.95TM0.05N alloys comes from the Mn, Fe, Co and Ni dopants, whose partial magnetic moments are significantly greater than the combined partial magnetic moments of Li, Mg and N. Comparing LiMg0.95Mn0.05N with LiMg0.95Fe0.05N, LiMg0.95Co0.05N and LiMg0.95Ni0.05N, the exchange splitting energy ΔTM(e+,e−) between their spin-up and spin-down states is discussed. The splitting of the Mn(3d) states between (e+,e−) is larger than that of Fe, Co and Ni; therefore ΔMn(e+,e−)>ΔFe(e+,e−)>ΔCo(e+,e−)>ΔNi(e+,e−). Furthermore, there is a correlation between the magnetic moment and the electronegativity trend of the TM dopant. Specifically, the electronegativity trend (χTM) is well matched with the total spin-moment trend, where χNi>χCo>χFe>χMn.
Moments are widely used in pattern recognition, image processing, computer vision and multiresolution analysis. To clarify and guide the use of the different types of moments, we present in this paper a study of the different moments and compare their behavior. After an introduction to geometric, Legendre, Hermite and Gaussian–Hermite moments and their calculation, we first analyze their behavior in the spatial domain. Our analysis shows that orthogonal moment basis functions of different orders have different numbers of zero-crossings and very different shapes; they can therefore better separate image features based on different modes, which is of particular interest for pattern analysis and shape classification. Moreover, Gaussian–Hermite moment basis functions are much smoother; they are thus less sensitive to noise and avoid the artifacts introduced by window-function discontinuity. We then analyze the spectral behavior of the moments in the frequency domain. Theoretical and numerical analyses show that orthogonal Legendre and Gaussian–Hermite moments of different orders separate different frequency bands more effectively. It is also shown that Gaussian–Hermite moments provide an approach to constructing orthogonal features from the results of wavelet analysis. The orthogonality equivalence theorem is also presented. Our analysis is confirmed by the numerical results reported.
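As a concrete reference for the moment families compared above, the sketch below computes low-order geometric and Legendre moments of a small grayscale image with NumPy/SciPy. The rectangular test image, the mapping of pixel coordinates to [-1, 1] and the Riemann-sum discretization are illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch: geometric and Legendre moments of a grayscale image (illustrative only).
import numpy as np
from scipy.special import eval_legendre

def geometric_moment(img, p, q):
    """m_pq = sum_x sum_y x^p y^q f(x, y) over pixel coordinates."""
    y, x = np.indices(img.shape)
    return np.sum((x ** p) * (y ** q) * img)

def legendre_moment(img, m, n):
    """lambda_mn with pixel coordinates mapped to [-1, 1] (discrete approximation)."""
    H, W = img.shape
    x = np.linspace(-1.0, 1.0, W)
    y = np.linspace(-1.0, 1.0, H)
    Px = eval_legendre(m, x)                  # Legendre polynomial P_m sampled on the x grid
    Py = eval_legendre(n, y)                  # Legendre polynomial P_n sampled on the y grid
    norm = (2 * m + 1) * (2 * n + 1) / 4.0
    dx, dy = 2.0 / W, 2.0 / H                 # Riemann-sum approximation of the double integral
    return norm * np.sum(Py[:, None] * Px[None, :] * img) * dx * dy

img = np.zeros((64, 64)); img[20:44, 16:48] = 1.0    # a simple rectangular test pattern
print(geometric_moment(img, 1, 0), legendre_moment(img, 2, 0))
```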
The capability of Kasuba's Simplified Fuzzy ARTMAP (SFAM) to behave as a pattern recognizer/classifier of both noisy and noise-free images is investigated in this paper. This calls for augmenting the original neuro-fuzzy model with a modified moment-based RST-invariant feature extractor.
The potential of the SFAM-based pattern recognizer to recognize patterns — monochrome and color, noisy and noise-free — has been studied on two experimental problems. The first experiment, which concerns monochrome images, pertains to the recognition of satellite images, a problem discussed by Wang et al. The second experiment, which concerns color images, deals with the recognition of some sample colored test patterns. The results of the computer simulations are also presented.
An experimental analysis of shape classification methods based on moment and autoregressive (AR) invariants is presented. Various types of translation, scale and rotation invariants are used to construct feature vectors for classification. The performance is evaluated using five different objects picked up from real scenes with a TV camera. Silhouettes and contours are extracted from nonoccluded two-dimensional (2D) objects rotated, scaled and translated in 3D space. The feature extraction methods are implemented and systematically tested using several parametric and nonparametric classifiers. The results clearly show the advantage of the method based on the moment invariants.
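The abstract does not specify which moment invariants are used; as a purely illustrative example of a translation-, scale- and rotation-invariant feature vector, the sketch below computes the first two classical Hu-type invariants from normalized central moments of a binary silhouette.

```python
# Minimal sketch: first two Hu-type moment invariants of a binary silhouette (illustrative).
import numpy as np

def central_moment(img, p, q):
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xbar) ** p * (y - ybar) ** q * img).sum()

def normalized_moment(img, p, q):
    m00 = img.sum()
    return central_moment(img, p, q) / m00 ** (1 + (p + q) / 2.0)   # scale normalization

def hu_invariants(img):
    eta = lambda p, q: normalized_moment(img, p, q)
    phi1 = eta(2, 0) + eta(0, 2)                                    # rotation-invariant
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2        # rotation-invariant
    return np.array([phi1, phi2])

img = np.zeros((64, 64)); img[10:30, 20:50] = 1.0                    # toy silhouette
print(hu_invariants(img))
```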
A closed form solution for characterizing voltage-based signals in an RLC tree is presented. The closed form solution is used to derive figures of merit to characterize the effects of inductance at a specific node in an RLC tree. The effective damping factor of the signal at a specific node in an RLC tree is shown to be one useful figure of merit. It is shown that as the effective damping factor of a signal increases, an RC model is sufficiently accurate to characterize the waveform. The rise time of the input signal driving an RLC tree is shown to be a second factor that affects the relative significance of inductance. As the rise time of the input signal increases as compared to the effective LC time constant at a specific node within an RLC tree, the signal at this node will no longer exhibit the effects of inductance. It is demonstrated that a single line analysis to determine the importance of including inductance to characterize an interconnect line that is a part of a tree is invalid in many cases and can lead to erroneous conclusions. The error exhibited by single line analysis is due to the large interaction among the branches of the tree.
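For orientation only, the textbook damping factor of a lumped series RLC segment is ζ = (R/2)·√(C/L); the effective damping factor derived in the paper for a node inside a tree generalizes this idea, so the formula and threshold in the sketch below are assumptions for a single lumped segment, not the paper's figure of merit.

```python
# Minimal sketch: lumped single-segment damping factor and a crude inductance criterion.
# NOT the paper's tree-level figure of merit; only the standard lumped-RLC analogue.
import math

def damping_factor(R, L, C):
    """zeta for a series RLC segment: zeta = (R/2) * sqrt(C/L)."""
    return 0.5 * R * math.sqrt(C / L)

R, L, C = 50.0, 5e-9, 2e-13          # illustrative segment values (ohm, henry, farad)
zeta = damping_factor(R, L, C)
print(f"zeta = {zeta:.2f} ->",
      "RC model likely adequate" if zeta > 1 else "inductance matters")
```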
It is well known that classical systems governed by ODE or PDE can have extremely complex emergent properties. Many researchers have asked: is it possible that the statistical correlations which emerge over time in classical systems would allow effects as complex as those generated by quantum field theory (QFT)? For example, could parallel computation based on classical statistical correlations in systems based on continuous variables, distributed over space, possibly be as powerful as quantum computing based on entanglement? This paper proves that the answer to this question is essentially "yes," with certain caveats.
More precisely, the paper shows that the statistics of many classical ODE and PDE systems obey dynamics remarkably similar to the Heisenberg dynamics of the corresponding quantum field theory (QFT). It supports Einstein's conjecture that much of quantum mechanics may be derived as a statistical formalism describing the dynamics of classical systems.
Predictions of QFT result from combining quantum dynamics with quantum measurement rules. Bell's Theorem experiments which rule out "classical field theory" may therefore be interpreted as ruling out classical assumptions about measurement which were not part of the PDE. If quantum measurement rules can be derived as a consequence of quantum dynamics and gross thermodynamics, they should apply to a PDE model of reality just as much as they apply to a QFT model. This implies: (1) the real advantage of "quantum computing" lies in the exploitation of quantum measurement effects, which may have possibilities well beyond today's early efforts; (2) Lagrangian PDE models assuming the existence of objective reality should be reconsidered as a "theory of everything." This paper will review the underlying mathematics, prove the basic points, and suggest how a PDE-based approach might someday allow a finite, consistent unified field theory far simpler than superstring theory, the only known alternative to date.
The study of the higher-order moments of a distribution and of its cumulants constitutes a sensitive tool to investigate the correlations between the particles produced in high-energy interactions. In our previous work, we used the Tsallis q-statistics, NBD, Gamma and shifted Gamma distributions to describe the multiplicity distributions in π−-nucleus and p-nucleus fixed-target interactions at various energies ranging from PLab = 27 GeV to 800 GeV. In this study, we extend our analysis by calculating the moments using the Tsallis model for these fixed-target data. Using the Tsallis model, we also calculate the average charged multiplicity and its dependence on energy. It is found that the average charged multiplicity and the moments predicted by Tsallis statistics are in good agreement with the experimental values, indicating the success of the Tsallis model on data from visual detectors. The study of the moments also shows that the KNO scaling hypothesis holds well at these energies.
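For reference, the normalized moments commonly used in such analyses are C_q = ⟨n^q⟩/⟨n⟩^q; the sketch below computes them for any discrete multiplicity distribution P_n. The Tsallis parameterization itself is not reproduced, and the Poisson-like input distribution is only a stand-in for real data.

```python
# Minimal sketch: normalized moments C_q = <n^q> / <n>^q of a multiplicity distribution.
import numpy as np
from scipy.stats import poisson

def normalized_moments(P, qmax=5):
    """P[n] = probability of n charged particles; returns C_2 ... C_qmax."""
    n = np.arange(len(P))
    P = np.asarray(P, dtype=float) / np.sum(P)
    mean = np.sum(n * P)
    return {q: np.sum(n ** q * P) / mean ** q for q in range(2, qmax + 1)}

# Illustrative Poisson-like distribution as a stand-in for experimental data:
P = poisson.pmf(np.arange(60), mu=8.0)
print(normalized_moments(P))
```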
A study of the characteristic properties of charged-particle production in hadron–nucleus collisions at high energies is performed by utilizing approaches from different statistical models. Predictions from approaches using the negative binomial distribution, the shifted Gompertz distribution, the Weibull distribution and the Krasznovszky–Wagner distribution are considered for a comparative study of the relative success of these models. These distributions, derived from a variety of functional forms, are based either on phenomenological parameterizations or on some model of the underlying dynamics. Some of them have also been used to study data from the Large Hadron Collider (LHC) for both proton–proton and nucleus–nucleus collisions. Various physical and derived observables are used for the analysis.
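For reference, the negative binomial distribution, the most widely used benchmark among the forms compared, reads (with ⟨n⟩ the mean multiplicity and k the shape parameter):

\[
P_n = \frac{\Gamma(n+k)}{\Gamma(k)\,n!}\left(\frac{\langle n\rangle}{\langle n\rangle + k}\right)^{\!n}\left(\frac{k}{\langle n\rangle + k}\right)^{\!k}.
\]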
This paper generalizes the classical cubic spline through the construction of the cubic spline coalescence hidden-variable fractal interpolation function (CHFIF) via its moments, i.e. its second derivatives at the mesh points. The second derivative of a cubic spline CHFIF is a typical fractal function that is self-affine or non-self-affine depending on the parameters of the generalized iterated function system. Convergence results and the effects of the hidden variables are discussed for cubic spline CHFIFs.
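For context, the classical cubic spline that the CHFIF construction generalizes is determined from its moments M_i = s''(x_i) by the standard tridiagonal continuity conditions (knots x_0 < ... < x_N, h_i = x_i − x_{i−1}, data y_i); the fractal version modifies this setup, and the relation below is only the classical starting point:

\[
\frac{h_i}{6}M_{i-1} + \frac{h_i + h_{i+1}}{3}M_i + \frac{h_{i+1}}{6}M_{i+1}
= \frac{y_{i+1}-y_i}{h_{i+1}} - \frac{y_i - y_{i-1}}{h_i}, \qquad i = 1,\dots,N-1.
\]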
The paper introduces a new failure-time distribution called the additive Teissier–Weibull distribution. Among its features, it is capable of modeling increasing and bathtub-shaped hazard rates. It is thus useful in modeling heterogeneous populations and can be viewed as the failure-time model of a system having two modes of failure that follow the Teissier and Weibull distributions, respectively. Some of its important properties are obtained, such as quantiles, moments and the shapes of the probability density and hazard functions. Four methods of estimation, namely maximum likelihood, least squares, weighted least squares and maximum product spacing, are proposed to estimate the unknown parameters. To compare the performance of the estimates, an extensive simulation study is carried out with varying sample sizes. Finally, failure times of primary reactor pumps and power generators are fitted and analyzed to show that the new additive hazard-rate model may give a better fit than many other existing models.
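A minimal sketch of the additive construction follows: the system hazard is the sum of the two component hazards and the survival function factorizes. The one-parameter Teissier hazard h_T(t) = θ(e^{θt} − 1) assumed here is one common parameterization and should be checked against the paper; the Weibull hazard and all parameter values are likewise illustrative.

```python
# Minimal sketch of an additive Teissier-Weibull hazard model.
# ASSUMPTION: Teissier hazard h_T(t) = theta*(exp(theta*t) - 1); verify against the paper.
import numpy as np

def hazard_teissier(t, theta):
    return theta * (np.exp(theta * t) - 1.0)

def hazard_weibull(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def survival_additive(t, theta, shape, scale):
    """S(t) = exp(-H_T(t) - H_W(t)), with both cumulative hazards in closed form."""
    H_T = np.exp(theta * t) - 1.0 - theta * t        # integral of theta*(exp(theta*s)-1) ds
    H_W = (t / scale) ** shape
    return np.exp(-(H_T + H_W))

t = np.linspace(0.01, 5.0, 5)
print(hazard_teissier(t, 0.4) + hazard_weibull(t, 0.7, 2.0))   # bathtub-like total hazard
print(survival_additive(t, 0.4, 0.7, 2.0))
```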
In this work, we develop an efficient methodology for analyzing risk in the wealth balance (hedging error) distribution arising from a mean square optimal dynamic hedge on a European call option, where the underlying stock price process is modeled on a multinomial lattice. By exploiting structure in mean square optimal hedging problems, we show that moments of the resulting wealth balance may be computed directly and efficiently on the stock lattice through the backward iteration of a matrix. Based on this moment information, convex optimization techniques are then used to estimate the Value-at-Risk of the hedge. This methodology is applied to a numerical example where the Value-at-Risk is estimated for a hedged European call option on a stock modeled on a trinomial lattice.
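The paper's matrix recursion for the wealth-balance moments is not reproduced here; as a simpler illustration of the same backward-iteration idea, the sketch below propagates the first two moments of a terminal call payoff through a recombining trinomial lattice, using the fact that E[g(S_T)^k | node] satisfies the same linear backward recursion for every power k. All parameter values are illustrative.

```python
# Minimal sketch: backward iteration of payoff moments on a recombining trinomial lattice.
# Illustrative only; not the paper's wealth-balance (hedging-error) moment recursion.
import numpy as np

def terminal_payoff_moments(S0, K, u, N, p_up, p_mid, p_down):
    """Return E[payoff] and E[payoff^2] at the root for a European call on a trinomial lattice."""
    j = np.arange(-N, N + 1)                            # terminal log-price levels S0 * u^j
    payoff = np.maximum(S0 * u ** j - K, 0.0)
    m1, m2 = payoff.copy(), payoff ** 2                 # first and second moments at maturity
    for _ in range(N):
        # Each node's children are one level down, the same level, and one level up.
        m1 = p_down * m1[:-2] + p_mid * m1[1:-1] + p_up * m1[2:]
        m2 = p_down * m2[:-2] + p_mid * m2[1:-1] + p_up * m2[2:]
    return m1[0], m2[0]

m1, m2 = terminal_payoff_moments(S0=100.0, K=100.0, u=1.05, N=50,
                                 p_up=0.3, p_mid=0.4, p_down=0.3)
print("mean =", m1, " variance =", m2 - m1 ** 2)
```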
We present a method of moments approach to pricing double barrier contracts when the underlying is modelled by a polynomial jump-diffusion. By general principles the price is linked to certain infinite dimensional linear programming problems. Subsequently approximating these by finite dimensional linear programming problems, upper and lower bounds for the prices of such options are found. We derive theoretical convergence results for this algorithm, and provide numerical illustrations by applying the method to the valuation of several double barrier-type contracts (double barrier knock-out call, American corridor and double-no-touch options) under a number of different models, also allowing for a deterministic short rate.
This paper provides an investigation of the effects of an investment’s return moments on drawdown-based measures of risk, including Maximum Drawdown (MDD), Conditional Drawdown (CDD), and Conditional Expected Drawdown (CED). Additionally, a new end-of-period drawdown measure is introduced, which incorporates a psychological aspect of risk perception that previous drawdown measures had been unable to capture. While simulation results indicate many similarities in the first and second moments, skewness and kurtosis affect different drawdown measures in radically different ways. Thus, users should assess whether their choice of drawdown measure accurately reflects the kind of risk they want to measure.
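As a reference point for the simplest of these measures, the sketch below computes the Maximum Drawdown of a wealth path; the conditional measures (CDD, CED) then average or take tail quantiles of the drawdowns, which is not shown here. The simulated return series is purely illustrative.

```python
# Minimal sketch: Maximum Drawdown of a wealth (or cumulative-return) path.
import numpy as np

def max_drawdown(wealth):
    """MDD = max over t of (running peak - wealth_t) / running peak."""
    wealth = np.asarray(wealth, dtype=float)
    running_peak = np.maximum.accumulate(wealth)
    drawdowns = (running_peak - wealth) / running_peak
    return drawdowns.max()

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=1000)           # illustrative daily returns
wealth = np.cumprod(1.0 + returns)
print("MDD =", max_drawdown(wealth))
```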
Let a0, a- and a+ be the preservation, annihilation and creation operators of a probability measure μ on ℝd, respectively. The operators a0 and [a-, a+] are proven to uniquely determine the moments of μ. We discuss the question: "What conditions must two families of operators satisfy in order to ensure the existence of a probability measure, having finite moments of any order, whose associated preservation operators and commutators between the annihilation and creation operators are the given families of operators?" For the case d = 1, a satisfactory answer to this question is obtained as a simple condition in terms of the Szegö–Jacobi parameters. For the multidimensional case, we give some necessary conditions. We also give a table with the associated preservation operators and commutators between the annihilation and creation operators for some of the classic probability measures on ℝ.
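Up to indexing and normalization conventions (which vary between references), in one dimension these operators are tied to the Szegö–Jacobi parameters (α_n, ω_n) of the three-term recurrence for the orthogonal polynomials of μ; schematically:

\[
x\,p_n(x) = p_{n+1}(x) + \alpha_{n+1}\,p_n(x) + \omega_n\,p_{n-1}(x),
\]
\[
a^{+}e_n = e_{n+1},\qquad a^{0}e_n = \alpha_{n+1}e_n,\qquad a^{-}e_n = \omega_n e_{n-1},\qquad [a^{-},a^{+}]\,e_n = (\omega_{n+1}-\omega_n)\,e_n.
\]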
A method for computing the mixed moments of (not necessarily commutative) random vectors from the first-order moments, the q-commutators between the annihilation and creation operators, and the q-commutators between the annihilation and preservation operators, is presented. The method is illustrated by a relevant characterization of q-Gaussian vectors.
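Here the q-commutator is understood in the usual sense,

\[
[A,B]_q = AB - q\,BA,
\]

which reduces to the ordinary commutator for q = 1 and to the anticommutator for q = -1.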
We investigate one-dimensional three-state quantum walks. We find a formula for the moments of the weak limit distribution via a vacuum expectation of powers of a self-adjoint operator. We use this formula to fully characterize the localization of three-state quantum walks in one dimension. The localization is also characterized by investigating the eigenvectors of the evolution operator of the quantum walk. As a byproduct, we clarify the concepts of localization used differently in the literature. We also study the continuous part of the limit distribution. For typical examples we show that the continuous part is of the same kind as that of two-state quantum walks. We provide explicit expressions for the density of the weak limits of some three-state quantum walks.
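A minimal numerical illustration follows, assuming the Grover coin (one common choice of three-state coin and a standard example exhibiting localization; not necessarily the coin analyzed here). The walker carries a three-component internal state, with one component shifting left, one staying, and one shifting right per step.

```python
# Minimal sketch: one-dimensional three-state quantum walk with the Grover coin (illustrative).
import numpy as np

def grover_walk(steps):
    C = 2.0 / 3.0 * np.ones((3, 3)) - np.eye(3)          # 3x3 Grover coin (unitary)
    size = 2 * steps + 1                                  # positions -steps..steps
    psi = np.zeros((size, 3), dtype=complex)
    psi[steps, :] = np.array([1, 1, 1]) / np.sqrt(3)      # initial state at the origin
    for _ in range(steps):
        psi = psi @ C.T                                   # apply the coin at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]                      # component 0 moves left
        shifted[:, 1] = psi[:, 1]                         # component 1 stays put
        shifted[1:, 2] = psi[:-1, 2]                      # component 2 moves right
        psi = shifted
    return np.sum(np.abs(psi) ** 2, axis=1)               # position distribution

prob = grover_walk(100)
print("P(origin) =", prob[100])                            # localization: stays O(1) as steps grow
```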
Cotangent sums are associated with the zeros of the Estermann zeta function. They have also proven to be of importance in the Nyman–Beurling criterion for the Riemann Hypothesis. The main result of the paper is the proof of the existence of a unique positive measure μ on ℝ with respect to which certain normalized cotangent sums are equidistributed. Improvements as well as further generalizations of asymptotic formulas regarding the relevant cotangent sums are obtained. We also prove an asymptotic formula for a more general cotangent sum, as well as asymptotic results for the moments of the cotangent sums under consideration. We also give an estimate for the rate of growth of the moments of order 2k as a function of k.
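The cotangent sums in question are, up to normalization, of the form

\[
c_0\!\left(\frac{r}{b}\right) = -\sum_{m=1}^{b-1} \frac{m}{b}\,\cot\!\left(\frac{\pi m r}{b}\right), \qquad \gcd(r,b)=1,\ 1 \le r < b;
\]

the exact normalization used for the equidistribution and moment results may differ from this one.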