This paper examines the effects of anticipated and unanticipated uncertainty on macroeconomic fluctuations and investigates the optimal macroeconomic policy portfolio. Our empirical findings indicate that both anticipated and unanticipated uncertainty shocks reduce gross corporate output, although the negative impact of the former is relatively smaller. Using a multi-sector DSGE model, we further find that anticipated uncertainty shocks lead to a relatively smaller rise in the risk premium on corporate lending and smaller decreases in corporate lending, investment, and output. The policy analysis reveals that monetary policy can effectively dampen fluctuations in macroeconomic variables such as output, but its impact on financial variables such as risk premiums and asset prices is limited. Macroprudential policy, on the other hand, can directly tackle the root causes of uncertainty shocks and alleviate their adverse effects. Combining the two policies helps balance the objectives of managing systemic risk and mitigating macroeconomic fluctuations.
This research examines the interval-valued availability and cost of a competing-risk system with dependent catastrophic and degradation failures under uncertainty, where uncertainty means that the probability of successful operation of the system is not precisely known. The system has three states: normal, degraded, and totally failed. Degradation failures lessen the system's overall effectiveness and drive it to a degraded state, whereas a catastrophic failure abruptly terminates the system's operation and results in a totally failed state. The two failure modes are dependent in that each degradation failure increases the likelihood of a catastrophic failure. To identify failures, sequential inspections are performed on the system: if the system is found to be degraded, a minimal repair is executed; if a catastrophic failure is detected, a corrective repair is performed. Combining these elements, a theorem describing the upper and lower bounds of the model's reliability is derived, and further theorems establish the bounds of the point availability, the long-run availability, and the long-run average cost rate. A numerical example of an aluminum electrolytic capacitor demonstrates the results.
Should the agile approach or the stage-gate approach be used to manage the new product development process in a big data environment? Although the stage-gate approach has been the dominant approach to managing new product development, the agile literature suggests that the stage-gate approach struggles with response uncertainty. The agile approach facilitates the processing of customer feedback by developing and deploying minimum viable products. Conflicting views on the effectiveness of the two approaches centre on their capabilities for dealing with uncertainty through information processing. The diffusion of digital technologies offers new ways to process information, but few studies have investigated how the effectiveness of the agile and stage-gate approaches varies with different information-processing capabilities in a data-rich environment. This study categorises information-processing strategies into those that use internal or external big data and explores how these strategies interact with the stage-gate and agile approaches. To test the research framework, we conducted an on-site survey covering 261 new product development projects and analysed the data with a multivariable linear regression model. The regression results show that the agile approach should be integrated with internal rather than external big data utilisation, and that the stage-gate approach should be integrated with external rather than internal big data utilisation. This finding deepens our understanding of innovation process management by exploring the effects of integrating the two approaches with big data utilisation strategies.
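A minimal sketch of the kind of interaction (moderated) regression described in this abstract, assuming hypothetical variable names (npd_performance, agile, stage_gate, internal_bd, external_bd, firm_size) and a hypothetical data file; it is not the study's actual specification.

```python
# Sketch of a regression with interaction terms between development approach
# and big data utilisation. All column names and the data source are
# hypothetical placeholders for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("npd_projects.csv")  # one row per new product development project

# Interaction terms test whether big data utilisation moderates each approach.
model = smf.ols(
    "npd_performance ~ agile * internal_bd + agile * external_bd"
    " + stage_gate * internal_bd + stage_gate * external_bd + firm_size",
    data=df,
).fit()
print(model.summary())
```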
Since the advent of networked systems, fuzzy graph theory has surfaced as a fertile paradigm for handling uncertainties and ambiguities. Among the different ways of handling the challenges created by the uncertainties and ambiguities of current networked systems, integrating fuzzy graph theory with cryptography has emerged as one of the most promising. This review paper surveys fuzzy graph-based cryptographic techniques, their application perspectives, and future research directions. The expressive power of fuzzy graphs allows cryptographic schemes to handle imprecise information and enhance security, benefiting domains such as image encryption, key management, and attribute-based encryption. The paper analyzes the research landscape in depth, focusing on the varied techniques used, such as fuzzy logic for key generation and fuzzy attribute representation for access control policies. A comparison of performance metrics reveals the trade-offs and advantages of different fuzzy graph-based approaches in efficiency, security strength, and computational overhead. Additionally, the survey explores the security applications of fuzzy graph-based cryptography and highlights its potential for secure communication in wireless sensor networks, privacy-preserving data mining, fine-grained access control in cloud computing, and blockchain security. Challenges and research directions, such as the standardization of fuzzy logic operators, algorithmic optimization, integration with emerging technologies, and exploration of post-quantum cryptographic applications, are also brought out. This review thus provides insight into this interdisciplinary domain and aims to stimulate further research on the design of more robust, adaptive, and secure cryptographic systems in the face of rising complexity and uncertainty.
The tracking technique examined in this study treats the nanosensor's velocity and distance as independent random variables with known probability density functions (PDFs). The nanosensor moves continuously in both directions from the starting point of the real line (the origin), oscillating as it travels back and forth through the origin. We provide an analytical expression for the density of this distance using the Fourier–Laplace representation and a sequence of random points. To account for this uncertainty, the tracking distance is treated as a function of a discounted effort-reward parameter. We demonstrate analytically how this parameter reduces the expected value of the first collision time between the nanosensor and the particle, and we confirm the existence of this technique.
In this study, uncertainty quantification and parametric sensitivity analysis in Probabilistic Tsunami Hazard Assessment (PTHA) are performed for the South China Sea using a Monte Carlo approach. Uncertainties in parameters such as the magnitude–frequency distribution of the potential tsunami zone, the geodetic information used to constrain the maximum magnitude, the properties determining the slip distribution, the scaling laws, and the dip of unit sources are considered, each varied separately while the others are held fixed. The Coefficient of Variation (COV) of the tsunami amplitudes corresponding to a fixed annual probability at different coastal sites is used to represent the uncertainty contributed by each parameter. The overall tsunami hazard and uncertainty are also presented by varying all parameters simultaneously in a Monte Carlo series. Our results suggest that a major contributor to the uncertainty is the set of magnitude–frequency distribution parameters, especially the maximum magnitude. Geodetic information can be used to correct the underestimation of the maximum magnitude caused by the scarcity of mega-earthquakes in the historical earthquake catalog, but it also introduces a large uncertainty if its parameters are not well determined. Among these geodetic parameters, the seismic coupling coefficient influences the uncertainty the most because an accurate value is difficult to determine. The effect of the slope parameter β in the magnitude–frequency relationship is complicated because it influences both the maximum magnitude and the number of earthquake events. The uncertainty associated with the down-dip width of the seismogenic zone is not remarkable but cannot be ignored, similar to that of the annual plate slip rate, while the effect of the rigidity is relatively small because its influence on the average slip of earthquake events reduces the uncertainty. Compared with the uncertainty in determining the maximum magnitude, the effects of slip distribution properties such as the Hurst exponent, correlation length, and scaling exponent, and of the choice of scaling laws, are relatively small. The uncertainty in the dip angle of the unit source is moderate and cannot be ignored, especially for sites along the strike of the subduction zone. Tsunami hazard curves for four coastal sites indicate that the tsunami hazard in the South China Sea is subject to large variations when all uncertainties in the PTHA are considered, with the COV of the 2000-year amplitude at different sites being about 0.25, which means that the 95th percentile is about twice the 5th percentile.
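An illustrative sketch of the COV-based uncertainty measure described above: for each Monte Carlo realisation of the hazard curve at a site, read off the amplitude at a fixed annual exceedance probability and summarise the spread across realisations with COV = std/mean. The synthetic hazard curves and all parameter values below are assumptions for illustration, not the paper's source model.

```python
import numpy as np

rng = np.random.default_rng(0)
target_rate = 1.0 / 2000.0                 # fixed annual exceedance probability (1/2000 yr)
amplitudes = np.linspace(0.1, 10.0, 200)   # tsunami amplitude grid (m)

def amplitude_at_rate(rates, amps, target):
    """Interpolate the amplitude whose exceedance rate equals the target."""
    # rates decrease with amplitude, so reverse both arrays for np.interp
    return np.interp(target, rates[::-1], amps[::-1])

samples = []
for _ in range(1000):                      # Monte Carlo realisations
    a0 = rng.uniform(0.02, 0.05)           # perturbed hazard-curve parameters (synthetic)
    rates = a0 * np.exp(-amplitudes / rng.uniform(1.0, 1.5))
    samples.append(amplitude_at_rate(rates, amplitudes, target_rate))

samples = np.array(samples)
cov = samples.std(ddof=1) / samples.mean()
print(f"COV of the 2000-yr amplitude: {cov:.2f}")
print("5th/95th percentiles:", np.percentile(samples, [5, 95]))
```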
We introduce and investigate a weighted propositional configuration logic over De Morgan algebras. This logic is able to describe software architectures with quantitative features, especially the uncertainty of the interactions that occur in the architecture. We address the equivalence problem for formulas of our logic by showing that every formula can be written in a specific form. Surprisingly, there are formulas that are equivalent only over specific De Morgan algebras. We provide examples of formulas in our logic that describe well-known software architectures equipped with quantitative features, such as the uncertainty and reliability of their interactions.
One of the missions of the cognitive processes of animals, including humans, is to make reasonable judgments and decisions in the presence of uncertainty. The balance between exploration and exploitation investigated in the reinforcement-learning paradigm is one of the key factors in this process. Recently, following pioneering work in behavioral economics, growing attention has been directed to human behaviors that deviate from the simple maximization of external reward. Here we study the dynamics of betting behavior in a simple game in which the probability of reward and the magnitude of reward are designed to give zero expected net reward (the "flat reward condition"). No matter how the subject behaves, there is on average no change in the subject's resources, and therefore every possible sequence of actions has the same value. Even in such a situation, the subjects are found to behave not randomly but in ways showing characteristic tendencies, reflecting the dynamics of the brain's reward system. Our results suggest that the brain's reward system is characterized by rich and complex dynamics that are only loosely coupled with the external reward structure.
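A minimal sketch of the "flat reward condition": the win probability and the payout ratio are chosen so that the expected net change in resources is exactly zero for any bet size, so every betting sequence has the same expected value. The parameters below are illustrative, not those used in the actual experiment.

```python
import random

p_win = 0.25                          # probability of winning a bet (illustrative)
payout_ratio = (1 - p_win) / p_win    # gain per unit bet on a win; a loss costs the bet
# Expected net reward per unit bet: p_win * payout_ratio - (1 - p_win) = 0

def play(initial_resources=100.0, n_trials=10_000, bet_fraction=0.1, seed=1):
    """Simulate repeated bets under the flat reward condition."""
    random.seed(seed)
    resources = initial_resources
    for _ in range(n_trials):
        bet = bet_fraction * resources
        if random.random() < p_win:
            resources += bet * payout_ratio
        else:
            resources -= bet
    return resources

print(play())  # individual runs fluctuate, but the expected value stays at the initial amount
```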
Although several automated strategies for the identification and segmentation of Multiple Sclerosis (MS) lesions in Magnetic Resonance Imaging (MRI) have been developed, they consistently fall short of the performance of human experts. This emphasizes the unique skills and expertise of human professionals in dealing with the uncertainty resulting from the vagueness and variability of MS, the lack of specificity of MRI with respect to MS, and the inherent instabilities of MRI. Physicians manage this uncertainty in part by relying on their radiological, clinical, and anatomical experience. We have developed an automated framework for identifying and segmenting MS lesions in MRI scans that introduces a novel approach to replicating human diagnosis. The framework, which has the potential to revolutionize the way MS lesions are identified and segmented, is based on three main concepts: (1) modeling the uncertainty; (2) using separately trained Convolutional Neural Networks (CNNs) optimized for detecting lesions, for taking into account their context in the brain, and for ensuring spatial continuity; (3) implementing an ensemble classifier to combine information from these CNNs. The framework has been trained, validated, and tested on a single MRI modality, the FLuid-Attenuated Inversion Recovery (FLAIR) sequence of the MSSEG benchmark public data set, which contains annotated data from seven expert radiologists and one ground truth. Comparison with the ground truth and with each of the seven human raters demonstrates that the framework operates similarly to human raters, while showing more stability, effectiveness, and robustness to biases than other state-of-the-art models despite using just the FLAIR modality.
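A generic sketch of the ensemble step named in concept (3): combining voxel-wise lesion-probability maps from several independently trained CNNs into a single binary segmentation. Simple weighted averaging with a threshold is shown purely for illustration; the paper's actual combination rule may differ.

```python
import numpy as np

def ensemble_segmentation(prob_maps, threshold=0.5, weights=None):
    """prob_maps: list of 3-D arrays of per-voxel lesion probabilities in [0, 1]."""
    stacked = np.stack(prob_maps, axis=0)
    fused = np.average(stacked, axis=0, weights=weights)   # weighted mean per voxel
    return (fused >= threshold).astype(np.uint8)            # binary lesion mask

# Example with three hypothetical CNN outputs on a small volume
rng = np.random.default_rng(42)
maps = [rng.random((4, 4, 4)) for _ in range(3)]
mask = ensemble_segmentation(maps)
print(mask.sum(), "voxels labelled as lesion")
```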
Many countries have promoted multilateral commerce in the global market under the WTO/TBT Agreement (Agreement on Technical Barriers to Trade). One of the most important factors in implementing this agreement is that traded goods should be evaluated only once, in either the exporting or the importing country. Therefore, test laboratories for product assessment should establish management systems that yield the same test results anywhere in the world. Research laboratories are in the same situation as test laboratories because they must compare their data with others' for joint studies. Both should be encouraged to apply the requirements of ISO/IEC 17025 to their systems.
The inclusion of metamorphic buffer layers (MBLs) in the design of lattice-mismatched semiconductor heterostructures is important for enhancing the reliability and performance of optical and electronic devices. These metamorphic buffer layers usually employ linear grading of composition, and materials including InxGa1-xAs and GaAs1-yPy have been used. Non-uniform and continuously graded profiles are beneficial for the design of partially-relaxed buffer layers because they reduce the threading dislocation density by distributing the misfit dislocations throughout the metamorphic buffer layer rather than concentrating them at the interface, where substrate defects and tangling can pin dislocations or otherwise reduce their mobility, as happens with uniform compositional growth. In this work we considered heterostructures involving a linearly-graded (type A) or step-graded (type B) buffer layer grown on a GaAs (001) substrate. For each structure type we present minimum-energy calculations and compare the cases of cation (Group III) and anion (Group V) grading. In addition, we studied (i) the average and surface in-plane strain and (ii) the average misfit dislocation density for heterostructures with various thicknesses and compositional profiles. Moreover, we show that differences in the elastic stiffness constants give rise to significantly different behavior in these two commonly-used buffer layer systems.
We have analyzed the strain resolution of x-ray rocking curve profiles from measurements of the peak position and peak width made with finite counting statistics. We considered x-ray rocking curves that may be Gaussian or Lorentzian in character and analyzed the influence of the effective number of counts, the full-width-at-half-maximum (FWHM), and the Bragg angle on the resolution. Experimental resolution values are often estimated to be on the order of 10⁻⁵, whereas this work predicts more sensitive values (on the order of 10⁻⁹) for smaller FWHM and larger effective counts and Bragg angles.
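A back-of-the-envelope sketch of how strain resolution scales with counting statistics, using two standard relations (not necessarily the exact expressions derived in the paper): the centroid of a Gaussian peak measured with N counts is located to roughly sigma/sqrt(N), and differentiating Bragg's law gives |Δd/d| = cot(θ)·Δθ. All numerical values are illustrative.

```python
import math

def strain_resolution(fwhm_arcsec, n_counts, bragg_angle_deg):
    """Estimate |Δd/d| resolution from peak-position uncertainty with N counts."""
    fwhm_rad = math.radians(fwhm_arcsec / 3600.0)
    sigma = fwhm_rad / 2.355                      # Gaussian sigma from FWHM
    delta_theta = sigma / math.sqrt(n_counts)     # peak-position (centroid) uncertainty
    return delta_theta / math.tan(math.radians(bragg_angle_deg))  # cot(theta) * delta_theta

# Narrow peak, many counts, large Bragg angle -> much finer strain resolution
print(f"{strain_resolution(fwhm_arcsec=10, n_counts=1e6, bragg_angle_deg=60):.1e}")
print(f"{strain_resolution(fwhm_arcsec=100, n_counts=1e3, bragg_angle_deg=20):.1e}")
```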
In this empirical paper we show that in the months following a crash there is a distinct connection between the fall of stock prices and the increase in the range of interest rates across a sample of bonds. This variable, often referred to as the interest rate spread, can be considered a statistical measure of the disparity in lenders' opinions about the future; in other words, it provides an operational definition of the uncertainty faced by economic agents. The observation of a strong negative correlation between stock prices and the spread variable relies on the examination of eight major crashes in the United States between 1857 and 1987. That relationship, which has remained valid for a century and a half despite important changes in the organization of financial markets, may be of interest for Monte Carlo simulations of stock markets.
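An illustrative sketch of the kind of correlation examined above: a stock-price index against the interest-rate spread (range of bond yields) over post-crash months. The column names and data file are hypothetical placeholders, not the paper's actual data.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly post-crash data with columns: stock_index, yield_high, yield_low
df = pd.read_csv("post_crash_monthly.csv")
df["spread"] = df["yield_high"] - df["yield_low"]

corr = np.corrcoef(df["stock_index"], df["spread"])[0, 1]
print(f"Correlation between stock index and spread: {corr:.2f} (expected to be strongly negative)")
```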
In this paper, we introduce the concept of opinion entropy, based on Shannon entropy, to describe the uncertainty of opinions. Using opinion entropy, we present a public opinion formation model and simulate the process of public opinion formation under various control conditions. Simulation results on the Holme–Kim network show that the opinion entropy falls to zero, with all individuals agreeing with the topic, only when the opponents' opinions are adjusted with a high control intensity. Controlling individuals with large degree can bring down the opinion entropy in a short time, whereas the opinion entropy of extremists does not change easily. Compared with previous opinion-cluster descriptions, opinion entropy provides a quantitative measurement of the uncertainty of opinions. Moreover, the model can be helpful for understanding the dynamics of opinion entropy and for controlling public opinion.
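A minimal sketch of the opinion-entropy idea: with opinions coded as discrete states, the Shannon entropy of their frequencies measures the uncertainty of public opinion and drops to zero once everyone agrees. The two-state pro/con coding below is an assumption for illustration.

```python
import math
from collections import Counter

def opinion_entropy(opinions):
    """Shannon entropy (in bits) of the distribution of opinion states."""
    counts = Counter(opinions)
    n = len(opinions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(opinion_entropy(["pro"] * 50 + ["con"] * 50))  # 1.0 bit: maximal uncertainty
print(opinion_entropy(["pro"] * 100))                # 0.0: consensus, entropy reduced to zero
```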
The present paper maps records of urban taxi trips into dynamic networks, in which nodes are communities and links represent the recorded taxi trips between them. These dynamic taxi-trip networks are formulated here as a special type of large-scale traffic system with an enormous impact on the city, one in which uncertainties as well as the spatial and temporal variation in the distribution of taxi trips must be considered. Three types of indicators are proposed to measure the activities between and inside the communities (the nodes of the network) from qualitative and quantitative perspectives. Analysis of records from New York City shows that these indicators are inconsistent with one another; nevertheless, none of them is distributed uniformly within the city, and each generally follows a power law despite its time-dependent properties. Further, the unusually low values of the scaling parameters obtained from power-law fits for all the proposed indicators illustrate the severe inhomogeneity of the networks (and of the city).
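An illustrative sketch of the power-law check mentioned above: fit the tail of an indicator's empirical distribution with a straight line in log-log space and read the scaling parameter off the slope. A simple least-squares fit on the complementary CDF is shown for illustration (more rigorous estimators such as maximum likelihood exist), and a synthetic heavy-tailed sample stands in for a real indicator.

```python
import numpy as np

rng = np.random.default_rng(7)
indicator = rng.pareto(a=1.5, size=10_000) + 1.0   # synthetic heavy-tailed indicator values

# Empirical complementary CDF (CCDF)
values = np.sort(indicator)
ccdf = 1.0 - np.arange(1, len(values) + 1) / len(values)
mask = (values > 1.0) & (ccdf > 0)

# Straight-line fit in log-log space; the scaling exponent is minus the slope
slope, intercept = np.polyfit(np.log10(values[mask]), np.log10(ccdf[mask]), 1)
print(f"estimated scaling exponent of the CCDF: {-slope:.2f}")
```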
It is shown that the concept of a Universal Computer cannot be realized. Specifically, instances of a computable function are exhibited that cannot be computed on any machine that is capable of only a finite and fixed number of operations per step. This remains true even if the machine is endowed with an infinite memory and the ability to communicate with the outside world while it is attempting to compute the function. It also remains true if, in addition, the machine is given an indefinite amount of time to compute the function. This result applies not only to idealized models of computation, such as the Turing Machine and the like, but also to all known general-purpose computers, including existing conventional computers (both sequential and parallel), as well as contemplated unconventional ones such as biological and quantum computers. Even accelerating machines (that is, machines that increase their speed at every step) cannot be universal.
The relation between the scatters of time and energy is well known in Classical and Quantum Physics, but the interpretations of this relation are not well understood. What is the meaning of a time scatter in a measurement? What is the meaning of the correlated energy gap? Why should they combine to give a value greater than or equal to a theoretical lower bound? In this paper, we discuss uncertainty in measurement from the computational point of view and introduce the archetype of a generic stochastic oracle for a Turing machine.
This paper examines the effects of uncertainty on an individual's own contribution to the provision of a collective good using an impure public good model. Two types of uncertainty relevant to free-riding behavior are evaluated: (i) uncertainty surrounding the contributions of others to the public characteristic and (ii) uncertainty surrounding the response of others to an individual's own contribution. We extend previous studies by examining both the compensated and uncompensated effects of increases in such risks on the provision of the collective good. We also establish conditions sufficient to determine both the compensated and the total, uncompensated effects of an increase in risk on the voluntary provision of the collective good.
The precautionary principle was included in the 1992 Rio Declaration and is part of important international agreements such as the Convention on Biological Diversity. Yet it is not a straightforward guide for environmental policy because, as shown in this paper, many interpretations are possible. Its different economic versions can result in conflicting policy recommendations about resource conservation. The principle does not always favor (natural) resource conservation (e.g., biodiversity conservation), although it has been adopted politically on the assumption that it does. The principle's consequences are explored for biodiversity conservation when the introduction of new genotypes is possible.
This paper studies the relationship between the monthly economic uncertainty of 20 advanced and emerging markets and two daily covariates, namely the exchange rate and the stock index, with particular emphasis on how the variables responded to the Brexit vote. We use a functional data approach supplemented with a point-of-impact structure to conduct a mixed-frequency analysis. We find that incorporating the point of impact, in this case the Brexit shock, is marginally important relative to models that ignore it. We also find that the exchange rate played a more important role than the equity market in transmitting the Brexit shock and causing heightened uncertainty in the 20 countries considered. Our results have important policy implications.