Indistinguishability obfuscation (IO) is a critical cryptographic primitive: it can be used to construct almost any other cryptographic scheme. The AB15 indistinguishability obfuscation scheme is the first practical one. To prevent the ideal generator from being found by search, the authors of the AB15 IO scheme set it to a relatively large size (in fact, such an ideal generator should be a "small prime"). This makes the ratio of the standard deviation of the Gaussian sampling to the size of the ideal generator smaller; we call this ratio the "deviation ratio".
In this paper, we point out that direct sampling together with a small deviation ratio makes the scheme insecure: a searching attack can reveal the hidden coefficients of the original Boolean circuit from its obfuscation. First, the attacker searches the level-0 encodings of 0, hoping that the encoded value is 0. Such an event happens with probability 1/(√(2π)σ*) or 1/(√(4π)σ*), where σ* denotes the deviation ratio. By a series of such searches, a weak obfuscator with a prime modulus can be obtained, from which the attacker recovers all hidden coefficients of the original Boolean circuit. Let n be the dimension of the independent variable and m the dimension of the coefficients of the original Boolean circuit. For the Simple Obfuscator, the searching time is 2n+2m, and the probability of successfully revealing the hidden coefficients is about (n+m)(1+√2)/(2√(4π)σ*). For the Robust Obfuscator, the searching time and the success probability are, respectively, n+m+C_{n+m}^2 and (n+m)/(2√(2π)σ*) + C_{n+m}^2/(2(√(4π)σ*)^2). Finally, we present a revised scheme with "conditionally accepted sampling" and a smaller deviation ratio. The new scheme avoids the above attack and greatly reduces the size complexity.
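The 1/(√(2π)σ*)-type probabilities above are just the chance that a rounded Gaussian sample lands exactly on 0. A minimal Monte Carlo sketch of that estimate (our own illustration with an assumed σ, not code from the AB15 scheme):

```python
import math
import random

def prob_zero_encoding(sigma, trials=200_000, seed=1):
    """Estimate the probability that a discrete Gaussian sample
    (a continuous Gaussian rounded to the nearest integer) equals 0."""
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(trials)
                if round(rng.gauss(0.0, sigma)) == 0)
    return zeros / trials

sigma = 20.0                       # illustrative deviation, not AB15's value
est = prob_zero_encoding(sigma)
approx = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
print(est, approx)                 # the two values agree closely
```

The agreement between the empirical frequency and 1/(√(2π)σ) illustrates why a small deviation ratio makes zero encodings easy to find by search.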
The rapid development of high-speed railways has brought significant attention to the coupled vibration of the train–bridge system. Vertical track irregularity, one of the primary excitation sources of train–bridge vibrations, is essentially a stochastic process. In this study, a beam model considering spatial vibrations is established to analyze the dynamic response under the passage of a train. Statistical methods, together with the Laplace transform and Duhamel's integral technique, are employed to derive the mean square value (MSV) and autocorrelation function (ACF) of the train's vertical response, as well as the mean and standard deviation (SD) of the beam's bending–torsional coupled vibration response. The accuracy of the proposed analytical approach is verified using the Newmark-β algorithm and Monte Carlo simulation. In numerical examples, the displacement response of the beam is explored mainly with respect to the influence of train speed and beam span. The results indicate that the maximum SD of the mid-span displacement occurs when the train is about to leave the beam, and diminishes with higher train speeds and shorter beam spans. For speeds ranging from 250 km/h to 350 km/h, an increase in speed leads to a noticeable rise in the peak vertical average displacement. Because high-frequency excitation attenuates the low-frequency structural response, the MSV of the train's displacement decreases with increasing speed.
The disappearance and reappearance of chaos in the Lorenz system under adjustment of its internal parameters are studied. We consider monotonic and periodic time-dependent changes of the Rayleigh number. When snapshot attractors are used to observe the change of the system's attractors, a relaxation time appears in the disappearance of chaos. We show that the rate of disappearance and reappearance of chaos is positively correlated with the control parameters. To capture the relaxation of chaotic disappearance and the sensitivity of trajectories, the finite-time Lyapunov exponent is used, and the statistical characteristics of the system are then presented through the standard deviation. Chaotic disappearance and reappearance manifest themselves as a decrease and an increase of the standard deviation, respectively: the standard deviation decreases continuously during chaotic disappearance, but increases discontinuously during chaotic reappearance. A distinctive scenario is that, no matter which parameter changes, the paths of chaotic disappearance and reappearance differ even when the same rate of change is used for both processes.
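The finite-time Lyapunov exponent mentioned above can be sketched for the static Lorenz system with the classical parameters (σ = 10, r = 28, b = 8/3). This is a generic Benettin-style two-trajectory estimate of our own, not the paper's time-dependent setup:

```python
import math

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Lorenz vector field at state s = (x, y, z)."""
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def step(s, dt):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt * (a + 2 * b2 + 2 * c + d) / 6.0
                 for si, a, b2, c, d in zip(s, k1, k2, k3, k4))

def ftle(T=40.0, dt=0.01, d0=1e-8):
    """Finite-time Lyapunov exponent via two nearby trajectories,
    renormalizing their separation back to d0 at every step."""
    s = (1.0, 1.0, 1.0)
    for _ in range(2000):            # discard the transient
        s = step(s, dt)
    p = (s[0] + d0, s[1], s[2])
    acc, n = 0.0, int(T / dt)
    for _ in range(n):
        s, p = step(s, dt), step(p, dt)
        d = math.dist(s, p)
        acc += math.log(d / d0)
        p = tuple(si + (pi - si) * d0 / d for si, pi in zip(s, p))
    return acc / (n * dt)

lam = ftle()
print(lam)    # positive, near the well-known value ~0.9 for these parameters
```

Under a time-dependent Rayleigh number, the same quantity computed over short windows would track the relaxation described in the abstract.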
A new empirical formula for the astrophysical S-factor has been proposed as a function of the barrier height, center-of-mass energy, Coulomb interaction parameter and Z2/A. Fourteen fusion reactions using 46,48,50Ti as projectiles on various targets were taken into consideration, producing compound nuclei in the atomic and mass number ranges 44 ≤ Z ≤ 104 and 92 ≤ A ≤ 258, respectively. The geometric factor, the Gamow–Sommerfeld factor and the empirical S-factor formula have been used to determine the fusion cross-sections. Compared to Wong's formula, this study shows better agreement with the available experiments, except for the 48Ti + 122Sn and 48Ti + 166Er fusion reactions. When correlated with the experimental data on titanium-induced fusion reactions, this study yields a lower standard deviation than Wong's formula, except for the fusion reactions leading to the compound nuclei 170Hf and 214Th.
The spontaneous fission (SF) half-lives of all 96 experimentally accessible SF emitters were examined. The SF emitters are classified as even Z–even A, odd Z–even A, odd Z–odd A and even Z–odd A. A plot of log T_{1/2} + kδm versus Z2/A yields a straight line, where k is a variable. The log T_{1/2} values produced in this study are compared with the other semi-empirical relations available in the literature, such as Ren et al. [Nucl. Phys. A 759, 64 (2005)], Xu et al. [Phys. Rev. C 78, 044329 (2008)], Santhosh et al. [Nucl. Phys. A 832, 220 (2010)] and Karpov et al. [Int. J. Mod. Phys. E 21, 1250013 (2012)]. Compared with these earlier semi-empirical equations, the new formula shows a reduced standard deviation for odd Z–even A and even Z–odd A nuclei. Compared with the other combinations (even Z–even A and odd Z–odd A nuclei), this study more accurately reproduces the experimental SF half-lives for even Z–odd A and odd Z–even A nuclei, with smaller σ, in the atomic number range 90 ≤ Z ≤ 112 and mass number range 232 ≤ A ≤ 284.
Medical image fusion is the process of deriving vital information from multimodality medical images. Important applications of image fusion include medical imaging, remote sensing, computer vision and robotics. For medical diagnosis, computerized tomography (CT) gives the best information about denser tissue with less distortion, while magnetic resonance imaging (MRI) gives better information on soft tissue with somewhat higher distortion. The main idea is to combine CT and MRI images to obtain the most significant information. The need is to focus on lower power consumption and smaller occupied area when implementing image fusion with the discrete wavelet transform (DWT). To design a DWT processor with low power and small area, a low-power multiplier and shifter are incorporated in the hardware. This low-power DWT improves the spatial resolution of the fused image and also preserves the color appearance. Moreover, adopting the lifting scheme in the 2D DWT process further reduces the power. To implement this 2D DWT processor on a field-programmable gate array (FPGA) as a very large scale integration (VLSI)-based design, the process is simulated with Xilinx 14.1 tools and with MATLAB. Compared with other available methods, this low-power, high-performance processor achieves improvements of 24%, 54% and 53% in standard deviation (SD), root mean square error (RMSE) and entropy, respectively. Thus, we obtain a low-power, low-area, high-performance FPGA architecture suited for VLSI, extracting the needed information from multimodality medical images through image fusion.
In this paper, the alpha decay process is investigated through theoretical approaches for spherical bismuth (Bi) isotopes in the range 187 ≤ A ≤ 214. The results obtained with the modified Coulomb and proximity potential model (MCPPM) are compared with the experimental data for the Bi isotopes. We analyze the systematics of the alpha decay half-life (HL) of Bi isotopes versus the decay energy and the total α-kinetic energy. The results and their systematics are compared with the available experimental data and with those obtained from empirical models such as the Viola–Seaborg (VS) formula, the Royer (R) formula and two versions of the modified Brown (mB) empirical formula. The computed half-lives (HLs) are compared with the experimental data and with the existing empirical estimates, and are found to be in good agreement.
The half-life of a parent nucleus of astatine isotopes 191 ≤ A ≤ 219 decaying via alpha emission is investigated by employing the Coulomb and proximity potential model (CPPM), using the WKB barrier penetration probability, and by other analytical and semi-empirical formulae: Royer, AKRE, Akrawy, RoyerB, MRoyerB, MRenB, SemFIS, VS and SLB. In the calculation of the alpha decay (AD) half-life, the available experimental and theoretical Q-values together with the total alpha kinetic energy have been considered. The behavior of the hindrance factor with the mass number of the parent nucleus for isotopes in the range 191 ≤ A ≤ 219, and the effect of magic numbers at closed shells, were investigated. Comparing the results obtained from the systematics with the experimental data, the prediction of the SemFIS formula was the best among the studied ones, showing the minimum standard deviation of 0.829881.
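The standard deviation used to rank such formulas is commonly computed as the root-mean-square difference between calculated and experimental log10 half-lives. A sketch with purely hypothetical numbers (the value 0.829881 above comes from the paper's actual data set):

```python
import math

def sigma_logT(log_calc, log_exp):
    """Root-mean-square deviation between calculated and experimental
    log10(T1/2) values, the usual figure of merit for decay formulas."""
    n = len(log_calc)
    return math.sqrt(sum((c - e) ** 2
                         for c, e in zip(log_calc, log_exp)) / n)

# hypothetical log10(T1/2) values, for illustration only
calc = [1.2, -0.5, 3.1, 0.8]
exp_ = [1.0, -0.2, 3.0, 1.1]
print(sigma_logT(calc, exp_))
```

A formula with a smaller σ reproduces the measured half-lives more faithfully across the isotope set.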
Many microscopic and macroscopic models are available in the literature for studying cluster radioactivity, and identifying an appropriate theoretical model for the cluster decay process is an important task. We studied cluster decay in the atomic and mass number ranges 87 ≤ Z ≤ 96 and 221 ≤ A ≤ 242 using the modified generalized liquid drop model (MGLDM), the Coulomb and proximity potential model (CPPM) and the generalized liquid drop model (GLDM), and compared the results of these macroscopic models with those of microscopic models. Various mass excess tables are available in the literature, and identifying a suitable mass excess table for cluster radioactivity is also an important part of this study. Along with the macroscopic analysis of cluster-decay half-lives, microscopic and semi-empirical relations were investigated, and the standard deviation was evaluated for the macroscopic, microscopic and semi-empirical formulae. A detailed investigation shows that, among the macroscopic models, MGLDM, among the microscopic models, RMFM, and among the semi-empirical formulae, AZF produce the least deviation. Hence, these results give insight into the prediction of cluster decay half-lives of unknown nuclei in the heavy and superheavy regions.
In this study, efforts were made to propose a semi-empirical equation for electron capture within the atomic number range 7 ≤ Z ≤ 103 and the mass number range 12 ≤ A ≤ 252. In total, about 753 nuclei were considered. These nuclei were categorized into four groups based on the parities of the protons and neutrons: even(Z)–even(N), even(Z)–odd(N), odd(Z)–even(N) and odd(Z)–odd(N). A comparative analysis was carried out between the formula's predictions and the experimental data. The improved semi-empirical formula for electron capture belongs to the first category, requiring only the daughter nucleus's atomic number and the decay energy, making it a valuable tool for predicting electron capture half-lives. The present formula effectively describes even Z–even N cases. However, significant discrepancies occur when either Z or N is odd, indicating that further refinement is necessary to improve its accuracy.
We study game theory using tick data of the won–dollar and yen–dollar exchange rates in financial markets. The standard deviation, the global efficiency and the autocorrelation for arbitrary strategies are shown to give rise to new dynamical properties, and these statistical quantities are very similar to those of the majority game. Our results are compared with numerical findings for other game models.
To model a given time series F(t) with fractional Brownian motions (fBms), it is necessary to have an appropriate error assessment for the related quantities. Usually the fractal dimension D is derived from the Hurst exponent H via the relation D = 2 − H, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range 〈|F(t + τ) − F(t)|〉 on the time span τ. For fBms, the error of the rescaled range not only depends on the data sampling but also varies with H, due to the presence of long-term memory. This error for a given time series therefore cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms and find that for 0 < H < 0.5, |F(t + τ) − F(t)| can be treated as independent for time spans without overlap; for 0.5 < H < 1, the long-term memory makes |F(t + τ) − F(t)| correlated, and an approximate method is given to evaluate the error of 〈|F(t + τ) − F(t)|〉. The error and the fractal dimension can then be determined self-consistently when modeling a time series with fBms.
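As a sanity check of the rescaled-range procedure, ordinary Brownian motion (H = 1/2, hence D = 1.5) can be simulated and H recovered as the log-log slope of the mean increment versus the time span, using non-overlapping spans as the abstract prescribes for H ≤ 0.5. This is our own illustration, not the paper's code:

```python
import math
import random

def mean_abs_increment(F, tau):
    """Mean |F(t + tau) - F(t)| over non-overlapping spans of length tau."""
    diffs = [abs(F[t + tau] - F[t]) for t in range(0, len(F) - tau, tau)]
    return sum(diffs) / len(diffs)

def hurst_estimate(F, taus):
    """Least-squares slope of log mean-increment vs log tau (= H for fBm)."""
    xs = [math.log(t) for t in taus]
    ys = [math.log(mean_abs_increment(F, t)) for t in taus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(7)
# ordinary Brownian motion: cumulative sum of independent Gaussian steps
F, s = [0.0], 0.0
for _ in range(100_000):
    s += rng.gauss(0.0, 1.0)
    F.append(s)

H = hurst_estimate(F, [2 ** k for k in range(1, 9)])
print(H)    # close to 0.5, so D = 2 - H is close to 1.5
```

For 0.5 < H < 1 the increments are correlated, and the error of each mean would need the approximate treatment the paper describes rather than the independent-sample estimate implicit here.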
Optimization is an important and decisive task in science. Many optimization problems are naturally too complicated to be modeled and solved by conventional optimization methods such as mathematical programming solvers. Meta-heuristic algorithms inspired by nature have started a new era in computing for solving such problems. This paper seeks an optimization algorithm that gradually learns the expected quality of different places and adapts its exploration–exploitation trade-off to the location of an individual. Using birds' classical conditioning learning behavior, a new particle swarm optimization algorithm is introduced in which particles learn to perform a conditioned behavior towards an unconditioned stimulus. Particles are divided into multiple categories in the problem space: if a particle finds the diversity of its category to be low, it moves towards its best personal experience, but if the diversity among the particles of its category is high, it inclines towards the global optimum of its category. We also use the idea of birds' sensitivity to the space in which they fly: particles are moved more quickly through unpromising regions so that they depart them as fast as possible, and slowed down in valuable regions so that they can explore them more thoroughly. For the initial population, the algorithm uses the instinctive behavior of birds to build a population based on the particles' merits. The proposed method has been implemented in MATLAB, with the population divided into several subpopulations, and compared with state-of-the-art methods. The results show that the proposed method is a consistent algorithm for solving static optimization problems.
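For reference, the canonical global-best PSO loop that the proposed variant builds on can be sketched as follows; the conditioning, category and speed-adaptation mechanisms described above are deliberately omitted, and the sphere objective and parameter values are our own illustrative choices:

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=300, seed=3):
    """Minimal global-best PSO minimizing the sphere function sum(x_i^2).
    Standard baseline only; the learning/conditioning extensions
    described in the abstract are not modeled here."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    g = min(P, key=f)[:]                  # global best position
    w, c1, c2 = 0.72, 1.49, 1.49          # common inertia/acceleration choice
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return f(g)

best = pso_sphere()
print(best)    # converges to near 0 on this smooth unimodal problem
```

The variant in the paper replaces the fixed update rule with category-dependent attraction toward either the personal best or the category's global best, chosen by local diversity.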
In real life, multiple attribute decision making (MADM) problems arise in many different areas, and numerous related extensions and methodologies have been proposed by researchers. Combining three-way decision ideas with TOPSIS for MADM is a feasible and meaningful research direction. In light of this, this paper generalizes the classical TOPSIS method with the help of the mean and standard deviation and proposes the so-called modified three-way TOPSIS. First, using a pair of thresholds derived from the mean and standard deviation, we divide the decision alternatives into three segments, which yields a preliminary ranking of the alternatives. Then, in each decision region, we use one of two ranking regulations (one-way TOPSIS or the modified two-way TOPSIS method) to rank the alternatives. A practical example of urban expressway route selection illustrates the feasibility of the proposed method. Finally, we test the feasibility and validity of the modified three-way TOPSIS method by comparison with some existing methods.
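The mean-and-standard-deviation thresholding step can be sketched as follows; the closeness scores and the exact thresholds mean ± SD are illustrative assumptions, not necessarily the paper's precise construction:

```python
import math

def three_way_partition(scores):
    """Split alternatives into positive / boundary / negative regions
    using thresholds mean + sd and mean - sd of the closeness scores.
    (Illustrative reading of the idea; the paper's thresholds may differ.)"""
    n = len(scores)
    mu = sum(scores) / n
    sd = math.sqrt(sum((s - mu) ** 2 for s in scores) / n)
    hi, lo = mu + sd, mu - sd
    pos = [i for i, s in enumerate(scores) if s >= hi]
    bnd = [i for i, s in enumerate(scores) if lo < s < hi]
    neg = [i for i, s in enumerate(scores) if s <= lo]
    return pos, bnd, neg

# hypothetical TOPSIS closeness coefficients for six candidate routes
scores = [0.82, 0.55, 0.48, 0.91, 0.20, 0.51]
print(three_way_partition(scores))    # -> ([0, 3], [1, 2, 5], [4])
```

Each region would then be ranked internally by the appropriate one-way or two-way TOPSIS rule, giving the final ordering.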
In this paper, a C0 finite element has been employed to derive an eigenvalue problem using higher-order shear deformation theory. The uncertain material and geometric properties are modeled as basic random variables. A mean-centered first-order perturbation technique is used to find the mean and standard deviation of the buckling temperature of laminated composite plates subjected to a uniform temperature rise, with random material and geometric properties. The effects of the modulus ratio, fiber orientation, length-to-thickness ratio, aspect ratio and various boundary conditions on the critical temperature are examined. It is found that small variations in the material and geometric properties significantly affect the buckling temperature of the laminated composite plate. The results have been validated against independent Monte Carlo simulation and results available in the literature.
This paper proposes a novel method for estimating the evolutionary power spectral density (EPSD) of a nonstationary process based on a single sample. In the proposed method, a sample of a nonstationary process is decomposed into several components with a new binomial fitting decomposition (BFD). The EPSD of each component is estimated using a newly proposed time-varying standard deviation estimation method together with a short-time Thomson multiple-window spectrum estimation method. The EPSD of the analyzed nonstationary sample is then obtained by combining the EPSDs of all components. Via a comprehensive numerical study, the applicability of the proposed EPSD estimation method is analyzed and compared with the Priestley method and a wavelet-based method. The numerical results indicate that the EPSD estimated by the proposed method is more consistent with the theoretical one than those of the other two methods. Finally, the EPSDs of wind records of Storm Ampil, measured atop the Shanghai World Financial Center, are analyzed with the proposed method.
This work introduces a new theory of 1/f noise that focuses on limited signals. Usually, 1/f noise represents drift, because 1/f noise is the power spectral density of drift. The subjects of the new theory are signals of limited value and duration; the basis of the new theory therefore corresponds to the real properties of any device and signal exhibiting 1/f noise. On this basis, the standard deviation of 1/f noise, its most important parameter, was derived. This standard deviation is in good agreement with (a) the widely used Hurst approximation, (b) the square-root-of-time dependence of Brownian motion displacement, and (c) the theoretically derived Brownian displacement. A comparison of an existing 1/f noise theory with the new one shows that the two theories are incompatible, because the subjects of the existing theory are unlimited signals, which do not exist in practice.
In this paper, an image fusion method based on a new class of wavelets (compactly supported, linear-phase, orthogonal non-separable wavelets with a dilation matrix) is presented. We first construct a non-separable wavelet filter bank. Using these filters, the images involved are decomposed into wavelet pyramids, and the following fusion algorithm is applied: for the low-frequency part, the average value is selected as the new pixel value. For the three high-frequency parts of each level, the standard deviation over a 3×3 window in each high-frequency sub-image is computed as an activity measure. If the standard deviation of a 3×3 window is larger than that of the corresponding window in the other high-frequency sub-image, the center pixel value of the window with the larger weighted area energy is selected; otherwise, a weighted combination of the two pixel values is computed. A new fused image is then reconstructed. The performance of the method is evaluated using the entropy, cross-entropy, fusion symmetry, root mean square error and peak signal-to-noise ratio. The experimental results show that the performance of the proposed non-separable wavelet fusion method is very close to that of the Haar separable wavelet fusion method.
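A simplified sketch of the high-frequency selection rule, keeping only the larger-local-standard-deviation coefficient and omitting the weighted-energy tie-breaking described above (our own reduced version, not the paper's full algorithm):

```python
import math

def local_std(img, r, c):
    """Standard deviation over the 3x3 window centered at (r, c)."""
    vals = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    mu = sum(vals) / 9.0
    return math.sqrt(sum((v - mu) ** 2 for v in vals) / 9.0)

def fuse_highfreq(a, b):
    """At each interior pixel, keep the coefficient whose 3x3
    neighborhood has the larger standard deviation (choose-max rule)."""
    rows, cols = len(a), len(a[0])
    out = [row[:] for row in a]           # borders copied from a
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if local_std(b, r, c) > local_std(a, r, c):
                out[r][c] = b[r][c]
    return out

flat = [[0.0] * 4 for _ in range(4)]               # featureless sub-band
edges = [[float((r + c) % 2) for c in range(4)]    # textured sub-band
         for r in range(4)]
fused = fuse_highfreq(flat, edges)
print(fused)    # interior pixels come from the textured sub-band
```

The standard deviation here plays the role of the activity measure: regions with stronger local variation are judged to carry more salient detail and win the selection.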
Symmetric triangular fuzzy numbers have been used to build fuzzy autoregressive models through various approaches, such as low–high data, integer numbers, measurement error and standard deviation data. However, most of these approaches have not been simulated or compared between ordinary least squares and fuzzy optimization in parameter estimation. In this paper, we are interested in implementing measurement error and standard deviation data in the construction of symmetric triangular fuzzy numbers. Both types of triangular fuzzy numbers are then deployed to build a fuzzy autoregressive model, in particular of second order. The simulation results show that the fuzzy autoregressive model produces a smaller mean square error and average width than the ordinary autoregressive model. In the application, high accuracy was also achieved by the fuzzy autoregressive model in consumer goods stock prediction. From the simulation and application, the proposed fuzzy autoregressive model is a competent approach for upper and lower forecasts.
This paper is concerned with dynamical systems of the form (X, f), where X is a bounded interval and f belongs to a class of measure-preserving, piecewise linear transformations on X. If A ⊆ X is a Borel set and x ∈ A, the Poincaré recurrence time of x relative to A is defined to be the minimum of {n : n ∈ ℕ and f^n(x) ∈ A}, if the minimum exists, and ∞ otherwise. The mean of the recurrence time is finite and is given by Kac's recurrence formula. In general, the standard deviation of the recurrence times need not be finite but, for the systems considered here, a bound for the standard deviation is derived.
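Kac's formula says that for an ergodic measure-preserving map the mean recurrence time to a set A equals 1/μ(A). This can be checked numerically with an irrational circle rotation, one simple measure-preserving piecewise linear map of [0, 1) (our illustration; the paper treats a specific class of such maps):

```python
def rotation_orbit(x0, alpha, n):
    """Orbit of the rotation x -> (x + alpha) mod 1, a measure-preserving
    piecewise linear map of [0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = (x + alpha) % 1.0
    return xs

def mean_recurrence_time(orbit, a, b):
    """Average gap between successive visits of the orbit to A = [a, b)."""
    visits = [t for t, x in enumerate(orbit) if a <= x < b]
    gaps = [v2 - v1 for v1, v2 in zip(visits, visits[1:])]
    return sum(gaps) / len(gaps)

alpha = 0.5 * (5 ** 0.5 - 1)          # irrational rotation number
orbit = rotation_orbit(0.1, alpha, 200_000)
mrt = mean_recurrence_time(orbit, 0.0, 0.1)
print(mrt)    # close to 1/mu(A) = 10, as Kac's formula predicts
```

While the mean is pinned at 1/μ(A), the spread of the individual recurrence times is what the paper bounds: for general systems the standard deviation can be infinite, but not for the class considered there.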