It is well known that every sequential element may become metastable when provided with marginal inputs, such as input transitions occurring too close together or an input voltage not reaching a defined HI or LO level. In this case the sequential element requires extra time to decide which digital output level to finally present, which is perceived as an output delay. The amount of this delay depends on how close the element’s state is to the balance point, at which the delay may, theoretically, become infinite. While metastability can be safely avoided within a closed timing domain, it cannot be completely ruled out at timing domain boundaries. Therefore it is important to quantify its effect. Traditionally this is done by means of a “mean time between upsets” (MTBU), which gives the expected interval between two metastable upsets. The latter is defined as the event of latching the still undecided output of one sequential element by a subsequent one. However, such a definition only makes sense in a time-safe environment like a synchronous design. In this paper we will extend the scope to so-called value-safe environments, in which a sequential element can safely finalize its decision, since the subsequent one waits for completion before capturing its output. Here metastability is not a matter of “failure” but a performance issue, and hence characterization by MTBU is not intuitive. Therefore we will put the focus on the delay aspect and derive a suitable model. This model extends existing approaches by also including the area of very weak metastability and thus providing complete coverage. We will show its validity through comparison with transistor-level simulation results for the most popular sequential elements in different implementations, point out its relation to the traditional MTBU model parameters, namely τ and T0, and show how to use it for calculating the performance penalty in a value-safe environment.
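For context, the traditional MTBU characterization referred to above is commonly written in the following textbook form; the clock and data transition rates f_c and f_d are generic symbols assumed here for completeness, not notation taken from the abstract:

\mathrm{MTBU}(t_r) \;=\; \frac{e^{\,t_r/\tau}}{T_0 \, f_c \, f_d}

where t_r is the resolution time granted to the sequential element before its output is captured, τ is the metastability resolution time constant, and T_0 is the metastability window parameter.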
In this study, a heat and moisture transfer model of an enthalpy exchanger is proposed. With a separately measured sorption constant and diffusion coefficient, the model predicts the heat and moisture transfer effectiveness of an enthalpy exchanger. Two sample enthalpy exchangers were tested at a KS condition to verify the model. The model predicts the heat transfer effectiveness within 4%, the moisture transfer effectiveness within 10%, and the pressure drop within 6%. The spacer fin efficiency for heat transfer was 0.11 to 0.13; the fin efficiency for moisture transfer, however, was negligibly small. For heat transfer, the conduction resistance was less than 1% of the total thermal resistance. For moisture transfer, however, the membrane resistance was dominant over the convective moisture transfer resistance.
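The effectiveness values quoted above are presumably the standard air-to-air exchanger definitions; for balanced supply and exhaust flows these read as follows (the supply/exhaust subscripts s and e are generic symbols assumed here, not the paper's notation):

\varepsilon_{\mathrm{heat}} = \frac{T_{s,\mathrm{in}} - T_{s,\mathrm{out}}}{T_{s,\mathrm{in}} - T_{e,\mathrm{in}}}, \qquad \varepsilon_{\mathrm{moisture}} = \frac{\omega_{s,\mathrm{in}} - \omega_{s,\mathrm{out}}}{\omega_{s,\mathrm{in}} - \omega_{e,\mathrm{in}}}

where T is the dry-bulb temperature and ω the humidity ratio of the supply (s) and exhaust (e) streams.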
0-3 dielectric composites with high dielectric constants have received great interest for various technological applications. Great achievements have been made in the development of high-performance 0-3 composites, which can be classified into dielectric–dielectric composites (DDCs) and conductor–dielectric composites (CDCs). However, predicting the dielectric properties of a composite is still a challenging problem of both theoretical and practical importance. Here, the physical aspects of 0-3 dielectric composites are reviewed. The limitations of the current understanding and new developments in the physics of dielectric properties of dielectric composites are discussed. It is indicated that the current models cannot fully explain the physical aspects of the dielectric properties of 0-3 dielectric composites. For the CDCs, experimental results show that new equations/models are needed to predict the percolative behavior, incorporating more parameters to describe the behavior of these materials. For the DDCs, it is indicated that the dielectric loss of each constituent has to be considered, and that it plays a critical role in determining the dielectric response of these types of composites. The differences in the losses of the constituents can result in a dielectric constant higher than that of either constituent, which breaks the Wiener limits.
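For reference, the Wiener limits mentioned above are the series and parallel mixing bounds for a two-phase composite with volume fractions φ1, φ2 and permittivities ε1, ε2 (a standard result, stated here only for orientation):

\left(\frac{\phi_1}{\varepsilon_1} + \frac{\phi_2}{\varepsilon_2}\right)^{-1} \;\le\; \varepsilon_{\mathrm{eff}} \;\le\; \phi_1\varepsilon_1 + \phi_2\varepsilon_2

so an effective dielectric constant above the upper (parallel) bound falls outside the range attainable by simple loss-free mixing.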
The characteristics of the electromechanical response observed in an ionic-electroactive polymer (i-EAP) are represented by the time (t) dependence of its bending actuation (y). The electromechanical response of a typical i-EAP, poly(ethylene oxide) (PEO) doped with lithium perchlorate (LP), is studied. The shortcomings of all existing models describing the electromechanical response obtained in i-EAPs are discussed. A more reasonable model, y = y_max exp(−τ/t), is introduced to characterize this time dependence for all i-EAPs. The advantages and correctness of this model are confirmed using results obtained in PEO-LP actuators with different LP contents and at different temperatures. The applicability and universality of this model are validated using the reported results obtained from two other i-EAPs: one based on Flemion and the other on polypyrrole actuators.
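As an illustration of how a model of this form might be applied, a minimal curve-fitting sketch is given below; the sample data and parameter values are hypothetical placeholders, not results from the paper.

# Minimal sketch: fit the bending-actuation model y(t) = y_max * exp(-tau / t)
# to measured displacement data; the data below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

def bending_model(t, y_max, tau):
    """Bending actuation as a function of time: y = y_max * exp(-tau / t)."""
    return y_max * np.exp(-tau / t)

# Hypothetical time (s) and tip-displacement (mm) samples.
t_data = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
y_data = np.array([0.05, 0.21, 0.55, 0.83, 1.02, 1.18, 1.24])

popt, _ = curve_fit(bending_model, t_data, y_data, p0=(1.3, 5.0))
y_max_fit, tau_fit = popt
print(f"y_max = {y_max_fit:.3f} mm, tau = {tau_fit:.2f} s")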
A model to describe the mechanism of conformational dynamics in secondary protein structure based on matter interactions is proposed. The approach deploys the Lagrangian method by imposing certain symmetry breaking. The protein backbone is initially assumed to be nonlinear and represented by the sine-Gordon equation, while the nonlinear external bosonic sources are represented by a ϕ4 interaction. It is argued that the nonlinear source induces the folding pathway in a different way than in previous work with an initially linear backbone. Also, the nonlinearity of the protein backbone decreases the folding speed.
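For orientation, a Lagrangian of the kind described could be written schematically as follows; the coupling term between the two fields and all normalizations are not given in the abstract, so this form is only an assumption:

\mathcal{L} \;=\; \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi \;-\; \bigl(1 - \cos\phi\bigr) \;+\; \frac{1}{2}\,\partial_\mu\chi\,\partial^\mu\chi \;-\; \frac{1}{2}m^2\chi^2 \;-\; \frac{\lambda}{4!}\,\chi^4 \;-\; \mathcal{L}_{\mathrm{int}}(\phi,\chi)

where φ is the nonlinear (sine-Gordon) backbone field, χ is the external bosonic source field with ϕ4 self-interaction, and L_int denotes the interaction between them, whose explicit form is left unspecified here.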
We study the behavior of the number of votes cast for different electoral subjects in majority elections, in particular the Albanian elections of the last 10 years, as well as the British, Russian, and Canadian elections. We report the frequency of obtaining a certain percentage (fraction) of votes versus this fraction for the parliamentary elections. In the distribution of votes cast in majority elections we identify two regimes. In the low percentiles we see a power-law distribution with an exponent of about −1.7. The power-law regime contains over 80% of the data points, although these correspond to only 20% of the votes cast. The votes of the small electoral subjects are found in this regime. The other regime includes percentiles above 20% and follows a Gaussian distribution; it corresponds to the large electoral subjects. A similar pattern is observed in other first-past-the-post (FPP) elections, such as the British and Canadian ones, but here the Gaussian is reduced to an exponential. Finally, we show that this distribution cannot be reproduced by a modified "word of mouth" model of opinion formation. The behavior can be reproduced by a model that comprises different numbers of zealots, as well as different campaign strengths, for different electoral subjects, in the presence of preferential attachment of voters to candidates.
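A minimal sketch of a model of this kind, combining zealots, campaign strengths, and preferential attachment, might look like the following; the attachment rule and all parameter values are illustrative assumptions, not the authors' specification.

# Toy simulation: voters attach to electoral subjects preferentially, with
# per-subject zealot counts and campaign strengths (all values hypothetical).
import random

def simulate_election(n_voters, zealots, campaign):
    """zealots[i] and campaign[i]: committed voters and campaign strength of subject i."""
    votes = list(zealots)  # zealots vote for their own subject
    for _ in range(n_voters):
        # Attachment probability proportional to campaign strength times current votes.
        weights = [c * v for c, v in zip(campaign, votes)]
        winner = random.choices(range(len(votes)), weights=weights, k=1)[0]
        votes[winner] += 1
    return votes

# Example: ten subjects with a wide spread of zealot counts and campaign strengths.
zealots = [1, 1, 2, 3, 5, 10, 50, 200, 800, 1500]
campaign = [0.2, 0.3, 0.5, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5]
print(simulate_election(n_voters=20000, zealots=zealots, campaign=campaign))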
A new approach to modeling biomatter dynamics based on field theory is presented. It is shown that some well-known tools of field theory can be utilized to describe physical phenomena in living matter, in particular in elementary biomatter such as DNA and proteins. In this approach, the biomatter dynamics are represented as the result of interactions among its elementary constituents in the form of a Lagrangian. Starting from the Lagrangian provides a stronger underlying theoretical basis for further extension. Moreover, it also enables us to acquire rich physical observables using statistical mechanics instead of relying on the space-time dynamics of particular equations of motion, which are not solvable due to their nonlinearities. A few examples from previous results are given and briefly explained.
The uncertainties in scientific studies for climate risk management can be investigated at three levels of complexity: “ABC”. The most sophisticated involves “Analyzing” the full range of uncertainty with large multi-model ensemble experiments. The simplest is about “Bounding” the uncertainty by defining only the upper and lower limits of the likely outcomes. The intermediate approach, “Crystallizing” the uncertainty, distills the full range to improve the computational efficiency of the “Analyze” approach. Modelers typically dictate the study design, with decision-makers then facing difficulties when interpreting the results of ensemble experiments. We assert that to make science more relevant to decision-making, we must begin by considering the applications of scientific outputs in facilitating decision-making pathways, particularly when managing extreme events. This requires working with practitioners from the outset, thereby adding “D” for “Decision-centric” to the ABC framework.
We studied the relationship between the secular variation of Japanese public interest in energy and environmental problems and the information primarily released by the news media. From an investigation of the extent of public interest in three matters, global warming, energy saving, and nature, each indicated by opinion surveys, the number of newspaper articles, and the frequency of Internet searches, we proposed a model in which the public interest, along with the acquired public knowledge, is given as a function of the public memory of the information primarily provided by the news media. Society was assumed here to be immersed in a virtual field of information environment, which induces the collective interest of the public and is proportional in strength to the extent of the public memory, subject to oblivion. Introducing two types of oblivion function, we found that the model reproduces the real time variation of Japanese interest well, except for the case of nature, almost irrespective of the form of the function. Some comments were made on the attenuation of the public interest that occurred when the field weakened.
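A minimal sketch of a memory-with-oblivion model of the kind described, in which public interest follows media coverage convolved with a forgetting kernel, could look like the following; the two kernels and all parameter values are illustrative assumptions, not the paper's specification.

# Toy model: public interest proportional to the memory of past media coverage,
# with the memory decaying according to an "oblivion" (forgetting) function.
import numpy as np

def memory_field(coverage, oblivion):
    """Convolve daily media coverage with a forgetting kernel to obtain the memory field."""
    n = len(coverage)
    field = np.zeros(n)
    for t in range(n):
        ages = t - np.arange(t + 1)                   # age of each past day's coverage
        field[t] = np.sum(coverage[: t + 1] * oblivion(ages))
    return field

exp_oblivion = lambda age, tau=30.0: np.exp(-age / tau)           # exponential forgetting
pow_oblivion = lambda age, alpha=1.0: 1.0 / (1.0 + age) ** alpha  # power-law forgetting

# Hypothetical coverage: background reporting plus a burst of articles around day 100.
days = np.arange(365)
coverage = 1.0 + 20.0 * np.exp(-0.5 * ((days - 100) / 10.0) ** 2)

interest_exp = memory_field(coverage, exp_oblivion)
interest_pow = memory_field(coverage, pow_oblivion)
print(interest_exp[95:105])
print(interest_pow[95:105])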