Zika is a mosquito-transmitted viral disease that may spread either directly through the vector or via sexual transmission. Zika virus may persist in semen and urine long after it disappears from the blood; such persons are known as convalescent humans. The virus can also be transmitted vertically among mosquitoes. In this paper, we consider an eight-compartment Zika model to study the effect of all these aspects on the virus dynamics in both deterministic and stochastic environments. In the analytical part, we compute the basic reproduction number and discuss the stability of the different equilibria. We show that the proposed model undergoes a transcritical bifurcation when the reproduction number is unity, and we validate the model with real infection data from the Dominican Republic in 2016. To study the model in a stochastic environment, additive noise is introduced, formulated as standard Brownian white noise proportional to each class. We obtain conditions for disease extinction and for persistence in mean. All theoretical findings are supported by numerical simulations. Lastly, the paper ends with some conclusions.
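As a generic sketch of the perturbation scheme described (the drift functions, noise intensities, and compartment labels below are illustrative placeholders, not the paper's exact notation), each deterministic compartment equation receives a white-noise term proportional to its own class:

```latex
% Illustrative form of the stochastic perturbation for an eight-compartment model.
\[
  \mathrm{d}X_i(t) \;=\; f_i\bigl(X(t)\bigr)\,\mathrm{d}t
                    \;+\; \sigma_i\, X_i(t)\,\mathrm{d}B_i(t),
  \qquad i = 1,\dots,8,
\]
% where $f_i$ is the deterministic drift of class $X_i$, $\sigma_i > 0$ is the
% noise intensity, and $B_i(t)$ are independent standard Brownian motions.
```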
With the rapid development of modeling and simulation, there has been growing concern about the credibility of complex simulations. To validate complex simulation models or systems with dynamic and correlated outputs, an intelligent simulation result validation method based on graph neural networks (GNN) is presented. A framework for simulation result validation is proposed, dividing the process into three parts: graph structure modeling of the validation data, feature extraction based on graph representation learning (GRL), and Bayes factor (BF)-based assessment of the model's credibility. First, a graph structure modeling method is introduced to provide a predefined graph structure for the subsequent GRL. Next, the interdependencies and dynamic evolutionary patterns among variables are captured by a Multi-level Feature Extraction-based Graph Representation Learning (MFEGRL) model. The similarity of the graph representations is then compared using the BF to determine the model's credibility. Finally, the effectiveness of this method is demonstrated through a case study validating simulation models of the terminal guidance stage of a flight vehicle.
Rarefied isothermal gaseous flow through long diverging micro- and nanochannels is investigated in this paper using the two-relaxation-time (TRT) lattice Boltzmann method (LBM). The simulations are performed over a wide range of Knudsen numbers, pressure ratios, and divergence angles. The Bounce-Back Specular Reflection (BSR) slip boundary condition is applied and is connected to the second-order slip boundary condition coefficients through the antisymmetric relaxation time and the bounce-back portion parameter. The effects of the slip coefficients on the wall and centerline Mach numbers, as well as the mass flow rates, are investigated. The numerical results are validated against direct simulation Monte Carlo (DSMC) results reported in the literature. The results show that the local pressure distributions are almost independent of the slip coefficients, with excellent agreement with DSMC over a wide range of divergence angles. Our results demonstrate that, at each pressure ratio, there is a specific divergence angle at which the local unbounded Knudsen number and, as a result, the Mach number remain constant along the channel. This observation is almost independent of the slip coefficients, and the underlying reason is that the pressure drop is compensated for by the increase in channel area.
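For reference, the second-order slip boundary condition mentioned here is commonly written in the following form (sign and normalization conventions vary between authors; this is the standard textbook expression rather than the paper's own statement):

```latex
\[
  u_s - u_w \;=\; C_1\,\lambda\,
      \left.\frac{\partial u}{\partial n}\right|_{\text{wall}}
  \;-\; C_2\,\lambda^{2}\,
      \left.\frac{\partial^{2} u}{\partial n^{2}}\right|_{\text{wall}},
\]
% $u_s$: slip velocity, $u_w$: wall velocity, $\lambda$: mean free path,
% $n$: wall-normal coordinate, $C_1$, $C_2$: first- and second-order slip
% coefficients, tuned in the BSR scheme through the antisymmetric relaxation
% time and the bounce-back portion parameter.
```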
Robot arm positioning is an important factor in robotic processes. However, robot manipulators experience positioning inaccuracies. This positioning error is due to the dynamic inefficiencies of the actuator, a DC servomotor. To resolve this actuator problem, an electro-rheological (ER) clutch-brake mechanism is employed. This clutch-brake mechanism can actuate and halt the motion of the robot arm. The rotary mechanism consists of two similar clutches driven to rotate in opposite directions and an individual ER brake that provides braking torque to halt the manipulator at the required positions. The main aim of this paper is to establish a control strategy for the ER-actuated robot arm by validating the model against experimental results. This study is conducted to understand ER robotic positioning control for future applications.
The purpose of model validation is to test the goodness of fit of the identified model and to check whether the model is an appropriate representation of the underlying system. Model validation is a final and essential stage in most system identification procedures. Although model validation for nonlinear temporal systems has been extensively studied, model validation for spatiotemporal systems is still an open question. In this paper, correlation-based methods, which have been successfully applied to nonlinear temporal systems, are extended and enhanced to validate models of spatiotemporal systems. New nonlinear correlation functions are constructed from the inputs, one-step-ahead predicted outputs, and the residuals to test the dependence between the residuals and the inputs/outputs. Examples are included to demonstrate the application of the tests.
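A minimal sketch of the kind of correlation test described, in Python, with hypothetical arrays `u` (input), `y_hat` (one-step-ahead predictions), and `e` (residuals); the particular nonlinear correlation functions proposed in the paper are not reproduced here, only the generic normalized-correlation/confidence-band mechanism they build on:

```python
import numpy as np

def cross_correlation(x, y, max_lag=20):
    """Normalized cross-correlation r_xy(tau) for tau = 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.sum(x[:n - tau] * y[tau:]) / n for tau in range(max_lag + 1)])

def validity_tests(u, y_hat, e, max_lag=20):
    """Pass/fail for several dependence tests: the model is flagged as
    invalid if any tested correlation leaves the approximate 95%
    confidence band +/- 1.96/sqrt(N)."""
    band = 1.96 / np.sqrt(len(e))
    tests = {
        "r_ee":   cross_correlation(e, e, max_lag),        # residual whiteness
        "r_ue":   cross_correlation(u, e, max_lag),        # input/residual independence
        "r_u2e2": cross_correlation(u**2, e**2, max_lag),  # nonlinear dependence
        "r_ye":   cross_correlation(y_hat, e, max_lag),    # prediction/residual dependence
    }
    return {name: bool(np.all(np.abs(r[1:] if name == "r_ee" else r) <= band))
            for name, r in tests.items()}
```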
We address general approaches to the rational selection and validation of mathematical and computational models of tumor growth using methods of Bayesian inference. The model classes are derived from a general diffuse-interface, continuum mixture theory and focus on mass conservation of mixtures with up to four species. Synthetic data are generated using higher-order base models. We discuss general approaches to model calibration, validation, plausibility, and selection based on Bayesian methods, information theory, and maximum information entropy. We also address computational issues and provide numerical experiments based on Markov chain Monte Carlo algorithms and high-performance computing implementations.
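For context, the Bayesian model plausibilities referred to here are typically computed as posterior probabilities of each model class $M_j$ in a set $\mathcal{M} = \{M_1,\dots,M_m\}$ given data $D$; the following is the standard formulation rather than this paper's specific notation:

```latex
\[
  \pi(M_j \mid D, \mathcal{M})
    \;=\;
  \frac{\pi(D \mid M_j)\,\pi(M_j \mid \mathcal{M})}
       {\sum_{k=1}^{m} \pi(D \mid M_k)\,\pi(M_k \mid \mathcal{M})},
  \qquad
  \pi(D \mid M_j) \;=\; \int \pi(D \mid \theta_j, M_j)\,\pi(\theta_j \mid M_j)\,\mathrm{d}\theta_j,
\]
% $\theta_j$: parameters of model $M_j$; the evidence integrals are estimated
% numerically, e.g. via Markov chain Monte Carlo sampling.
```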
In this paper we focus on the transient behavior of a class of nonlinear differential systems. We describe the succession in time of the extrema of the state variables. This analysis is largely independent of the analytical formulation of the model. The possible theoretical sequences of extrema are represented on a graph. They can be compared with experimental sequences of extrema in order to validate the model qualitatively. An application to the Droop model and to an N-P-Z model with experimental data illustrates the method.
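A minimal sketch, in Python, of how an experimental sequence of extrema could be extracted from sampled state trajectories for such a comparison; the variable labels are hypothetical and this is only the bookkeeping step, not the paper's graph construction:

```python
import numpy as np

def extrema_sequence(t, states, names):
    """Time-ordered sequence of local extrema of several state variables,
    e.g. [('N', 'max'), ('P', 'min'), ...].

    `states` has shape (n_vars, n_times); `names` gives the (hypothetical)
    variable labels, e.g. 'N', 'P', 'Z'."""
    events = []
    for x, name in zip(states, names):
        is_max = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])   # interior local maxima
        is_min = (x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])   # interior local minima
        for idx in np.where(is_max)[0] + 1:
            events.append((t[idx], name, "max"))
        for idx in np.where(is_min)[0] + 1:
            events.append((t[idx], name, "min"))
    return [(name, kind) for _, name, kind in sorted(events)]
```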
An aluminum foam/polyurethane interpenetrating phase composite damper (AF/PUCD) can perform multi-functional energy dissipation to address different seismic hazards owing to its multiphase hysteretic behavior. To improve the design of AF/PUCDs in engineering structures, a highly effective model based on the real deformation of an AF/PUCD is needed to describe this multiphase hysteretic behavior. In this paper, a novel viscoelastic-friction model composed of a viscoelastic component and a friction component is constructed. The hysteretic responses in each phase under various external excitations are described through different combinations of the viscoelastic and friction components. The unknown model parameters are identified using the Universal Global Optimization (UGO) algorithm. The model results are compared with the experimental results and with the results of the modified Bouc–Wen model and the Optimum model. The comparison shows that the viscoelastic-friction model is more accurate in capturing the multiphase hysteresis of the AF/PUCD and in predicting the boundary of each phase when the AF/PUCD is subjected to various cyclic excitations. Therefore, the viscoelastic-friction model is a good candidate for the design of AF/PUCDs applied in vibration-control structures.
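As an illustration of the viscoelastic-plus-friction decomposition described, a generic restoring-force expression of this kind is sketched below (a standard Kelvin–Voigt element in parallel with Coulomb friction; the symbols and phase-dependent switching are illustrative, not the paper's exact parameterization):

```latex
\[
  F(t) \;=\; \underbrace{k\,x(t) \;+\; c\,\dot{x}(t)}_{\text{viscoelastic component}}
        \;+\; \underbrace{F_c\,\operatorname{sgn}\bigl(\dot{x}(t)\bigr)}_{\text{friction component}},
\]
% $x$: damper deformation, $k$: stiffness, $c$: damping coefficient,
% $F_c$: friction force; the parameters are recombined or switched between
% the deformation phases to reproduce the multiphase hysteresis.
```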
This paper validates a constitutive model for the human intervertebral disc annulus fibrosus via numerical simulations of a lumbar spine motion segment. This anisotropic hyperelastic fiber-reinforced constitutive model was previously developed by the authors. Based on three-dimensional (3D) lumbar spine segments constructed from CT scan images, a detailed and anatomically accurate human lumbar spine finite element (FE) model of the L3–L4 motion segment is developed. The FE model includes the vertebral bodies, the intervertebral disc, and various ligaments. Numerical simulations are carried out using the commercial CAE software package ABAQUS/Standard. The loading cases considered in the numerical analysis are set to be consistent with the setups of cadaveric specimen tests available in the literature. Numerical results such as load–displacement curves and nucleus pressure are compared with experimental data. The simulation results show good consistency with the cadaveric experimental data and good biomechanical fidelity. The constitutive model can be used for human intervertebral disc modeling and biomechanical analysis of the human spinal column.
Human liver biomechanical responses in frontal and lateral impacts were studied using a simplified Chinese human body finite element model (FEM) with a more geometrically accurate liver model, built for an average Chinese adult male from high-resolution CT data. The model developed in this paper was composed of a geometrically detailed liver model, simplified models of the thoracic-abdominal organs, and the human skeleton model. The whole model was then validated at various velocities by comparing simulation outcomes with Post Mortem Human Subjects (PMHS) experimental results for frontal and lateral pendulum impacts. The force–deflection and force–time characteristics were in good agreement with the test results. The validated model was then applied to study liver dynamic responses and injuries in simulations. The pressure, tensile stress, and peak strain that may induce hepatic injuries were computed from the model simulations and analyzed for correlation with global parameters such as thoracic deflection, viscous criterion value, and contact force. This study demonstrates that developing a simplified finite element thorax-abdomen model with a detailed liver model can be effective for hepatic injury assessment in the various impacts reported in the literature.
The process of human blood clotting involves a complex interaction of continuous-time/continuous-state processes and discrete-event/discrete-state phenomena, where the former comprise the various chemical rate equations and the latter comprise both threshold-limited behaviors and binary states (presence/absence of a chemical). Whereas previous blood-clotting models used only continuous dynamics and perforce addressed only portions of the coagulation cascade, we capture both continuous and discrete aspects by modeling the process as a hybrid dynamical system. The model was implemented as a hybrid Petri net, a graphical modeling language that extends ordinary Petri nets to cover continuous quantities and continuous-time flows. The primary focus is simulation: (1) fidelity to the clinical data in terms of clotting-factor concentrations and elapsed time; (2) reproduction of known clotting pathologies; and (3) fine-grained predictions that may be used to refine clinical understanding of blood clotting. Next, we examine sensitivity to rate-constant perturbation. Finally, we propose a method for titrating between reliance on the model and on prior clinical knowledge. For simplicity, we confine these last two analyses to a critical, purely continuous subsystem of the model.
Genome-scale metabolic models are a powerful tool to study the inner workings of biological systems and to guide applications. The advent of cheap sequencing has brought the opportunity to create metabolic maps of biotechnologically interesting organisms. While this drives the development of new methods and automatic tools, network reconstruction remains a time-consuming process where extensive manual curation is required. This curation introduces specific knowledge about the modeled organism, either explicitly in the form of molecular processes, or indirectly in the form of annotations of the model elements. Paradoxically, this knowledge is usually lost when reconstruction of a different organism is started. We introduce the Pantograph method for metabolic model reconstruction. This method combines a template reaction knowledge base, orthology mappings between two organisms, and experimental phenotypic evidence, to build a genome-scale metabolic model for a target organism. Our method infers implicit knowledge from annotations in the template, and rewrites these inferences to include them in the resulting model of the target organism. The generated model is well suited for manual curation. Scripts for evaluating the model with respect to experimental data are automatically generated, to aid curators in iterative improvement. We present an implementation of the Pantograph method, as a toolbox for genome-scale model reconstruction, curation and validation. This open source package can be obtained from: http://pathtastic.gforge.inria.fr.
This paper presents validation comparisons between field and laboratory measurements and a new probabilistic model for predicting ship underkeel clearance (UKC). Prototype ship motions and environmental data were obtained in May 1999 in the deep-draft entrance channel at Barbers Point, HI. These field measurements were reproduced in controlled laboratory studies in 2000 and 2002 with a model of the World Utility (WU) bulk carrier. These measurements constitute some of the data being used to validate the Corps's Channel Analysis and Design Evaluation Tool (CADET), a suite of programs for determining the optimum dredge depth for entrance channels. In general, the CADET predictions matched the field and laboratory measurements to within centimeter accuracy for wave heights ranging from 45 cm to 75 cm.
The effect of dynamic strain aging (DSA) on the mechanical behavior of AA5182 aluminum alloys is investigated through theoretical and numerical methods over a wide range of temperatures. This study proposes a physically based constitutive model to explain the DSA-induced hardening of AA5182 aluminum alloys. The proposed constitutive model consists of two main components, an athermal component and a thermal component. Of these, the athermal component contains an extra hardening term caused by DSA. To describe this hardening, a probability density function is introduced as a function of equivalent plastic strain, equivalent plastic strain rate, and temperature. The results show that this function can precisely describe the characteristics of DSA hardening. Experimental data from the literature are used to examine the validity of the proposed model. Lastly, a finite element (FE) solution for the proposed model is presented and validated by comparison with experimental data.
We propose a new model validation method based on ensemble empirical mode decomposition (EEMD) and scale-separated correlation. EEMD is used to analyze the nonlinear and nonstationary ozone concentration data and the data simulated by the Taiwan Air Quality Model (TAQM). The approach consists of sifting an ensemble of white-noise-added signals and treating the ensemble mean as the final true intrinsic mode functions (IMFs). It provides detailed comparisons of observed and simulated data at various temporal scales. The ozone concentration at the Wan-Li station in Taiwan is used to illustrate the power of this new approach. Our results show that, at an urban station, the ozone concentration fluctuation has various cycles, including semi-diurnal, diurnal, and weekly time scales. These results demonstrate that both the anthropogenic origin of the local pollutant and long-range transport effects are important. The validation tests indicate that the model used here performs well in simulating phenomena at all temporal scales.
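A minimal sketch of the ensemble-averaging step described above, in Python; the `emd` argument stands in for any standard EMD routine returning the IMFs of a single signal (e.g., one supplied by an EMD package), and the ensemble size and noise amplitude are illustrative defaults:

```python
import numpy as np

def eemd(signal, emd, n_ensemble=100, noise_std=0.2, seed=None):
    """Ensemble EMD: decompose many white-noise-added copies of `signal`
    with the supplied `emd` routine and average the resulting IMFs.

    `emd` is assumed to be a callable returning an array of shape
    (n_imfs, len(signal)); it is a placeholder, not a specific library call."""
    rng = np.random.default_rng(seed)
    imf_sets = []
    for _ in range(n_ensemble):
        noisy = signal + noise_std * signal.std() * rng.standard_normal(len(signal))
        imf_sets.append(emd(noisy))
    n_imfs = min(imfs.shape[0] for imfs in imf_sets)   # keep the common IMF count
    return np.mean([imfs[:n_imfs] for imfs in imf_sets], axis=0)
```

Scale-by-scale validation would then correlate the observed and simulated IMFs of matching order, rather than the raw series.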
Modeling and simulation technology has become an important means of studying various complex systems, and its application is now extensive. The accuracy of simulation models therefore becomes a critical problem and needs to be assessed with an appropriate model validation method. Simulation models often have multivariate dynamic responses with uncertainty, while most existing validation methods concentrate on validating static responses. Hence, a new validation method is proposed in this paper to validate the dynamic responses of simulation models over the time domain, at a single validation site and at multiple validation sites, by introducing discrete Chebyshev polynomials and the area metric. For each time series, the orthogonal expansion coefficients are first extracted by representing the time series with the discrete orthogonal polynomials. Then, the area metric and the u-pooling metric are employed to validate all the uncorrelated coefficients at a single validation site and at multiple validation sites, respectively, and the final validation result is obtained by summarizing the metric values. The feasibility and effectiveness of the proposed model validation method are illustrated through the example of the terminal guidance stage of a flight vehicle.
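A minimal sketch of the two ingredients the method combines, in Python with NumPy; the polynomial degree, the least-squares coefficient extraction, and the empirical-CDF area metric below are illustrative stand-ins for the paper's exact discrete Chebyshev construction:

```python
import numpy as np

def expansion_coefficients(series, degree=10):
    """Least-squares coefficients of a time series in a Chebyshev basis
    evaluated on the discrete time grid (a stand-in for the paper's
    discrete Chebyshev polynomials)."""
    t = np.linspace(-1.0, 1.0, len(series))
    basis = np.polynomial.chebyshev.chebvander(t, degree)   # shape (n, degree + 1)
    coeffs, *_ = np.linalg.lstsq(basis, series, rcond=None)
    return coeffs

def area_metric(sim_samples, exp_samples):
    """Area between the empirical CDFs of simulated and experimental
    samples of one expansion coefficient."""
    grid = np.sort(np.concatenate([sim_samples, exp_samples]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.trapz(np.abs(cdf(sim_samples) - cdf(exp_samples)), grid)
```

The per-coefficient metric values would then be summarized over coefficients (and, at multiple sites, pooled via the u-pooling metric) to give the final validation result.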
We report results of a "hindcast" experiment focusing on the agricultural and land-use component of the Global Change Assessment Model (GCAM). We initialize GCAM to reproduce observed agriculture and land use in 1990 and forecast agriculture and land-use patterns in one-year time steps to 2010. We report overall model performance for nine crops in 14 regions, identifying areas where the hindcast is in relatively good agreement with observations and areas where the correspondence is poorer. First, we find that when given observed crop yields as input data, producers in GCAM implicitly have perfect foresight for yields, leading to overcompensation for year-to-year yield variation. We explore a simple model in which planting decisions are based on expectations but production depends on actual yields, and find that this addresses the implicit perfect-foresight problem. Second, while existing policies are implicitly calibrated into IAMs, changes in those policies over the period of analysis can have a dramatic effect on the fidelity of model output. Third, we demonstrate that IAMs can employ techniques similar to those used by the climate modeling community to evaluate model skill. We find that hindcasting has the potential to yield substantial benefits to the IAM community.
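One simple way to formalize the expectation scheme described, shown only as an illustrative adaptive-expectations form and not necessarily the exact rule used in the paper:

```latex
\[
  \hat{y}_{t} \;=\; \alpha\, y_{t-1} \;+\; (1-\alpha)\,\hat{y}_{t-1},
  \qquad 0 < \alpha \le 1,
\]
% Planted area is chosen using the expected yield $\hat{y}_t$, while realized
% production uses the actual yield $y_t$, removing the implicit perfect foresight.
```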
The primary topic of the book is surrogate modeling and surrogate-based design of high-frequency structures. The purpose of the first two chapters is to provide the reader with an overview of the two most important classes of modeling methods: data-driven (or approximation) models and physics-based models, covered in Chapters 1 and 2, respectively. The remaining parts of the book give an exposition of the specific aspects of particular modeling methodologies and their applications to solving various simulation-driven design tasks such as parametric optimization or uncertainty quantification.
Data-driven models are by far the most popular type of surrogate. This is due to several reasons, including versatility, low evaluation cost, a large variety of mature methods, and, importantly from the point of view of practical utility, widespread availability through third-party toolboxes implemented in programming environments such as Matlab. This chapter covers the fundamentals of approximation-based modeling. We discuss the surrogate modeling flow, design of experiments, selected modeling methods (e.g., kriging, radial basis functions, support vector regression, and polynomial chaos expansion), as well as model validation approaches. The presented material is intended to provide readers who are new to the subject with the basics necessary to understand the remaining parts of the book. On the other hand, it is by no means exhaustive, and readers interested in a more detailed exposition can refer to the rich literature on the subject (e.g., Queipo et al., 2005; Forrester and Keane, 2009; Biegler et al., 2014; Chugh et al., 2019; Jin, 2005; Santana-Quintero et al., 2010; Gorissen et al., 2009).
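As a concrete illustration of one listed method together with a standard validation approach, the following brief sketch fits a kriging (Gaussian process) surrogate with scikit-learn and checks it by k-fold cross-validation; the training data are synthetic placeholders, not taken from the book:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import cross_val_score

# Hypothetical training data: 50 samples of a 3-dimensional design space.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

# Kriging surrogate: constant trend times a Gaussian (RBF) correlation kernel.
surrogate = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(3)),
    normalize_y=True,
)

# Model validation by 5-fold cross-validation (R^2 per fold).
scores = cross_val_score(surrogate, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())

surrogate.fit(X, y)
y_pred, y_std = surrogate.predict(X[:5], return_std=True)   # predictions with uncertainty
```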
We propose a formal definition of backtestability for a statistical functional of a distribution: a functional is backtestable if there exists a backtest function, depending only on the forecast of the functional and the related random variable, that is strictly monotonic in the former and has zero expected value for an exact forecast. We discuss the relationship with elicitability and identifiability, which turn out to be necessary conditions for backtestability. The variance and the expected shortfall are not backtestable for this reason. We compare (absolute) model validation in the context of hypothesis tests, via backtest functions, with (relative) model selection between competing forecasting models, via scoring functions. We define a backtest to be sharp when it is strictly monotonic with respect to the real value of the functional and not only to its forecast. This determines whether the expected value of the backtest also quantifies the prediction discrepancy and not only its significance. We show that the quantile backtest is not sharp and in fact provides no information whatsoever on the true value of the quantile. The expectile is also not sharp; we provide bounds for its true value, which are looser for outer confidence levels. We then introduce the notion of ridge backtests, applicable to particular non-backtestable functionals, such as the variance and the expected shortfall, that coincide with the attained minimum of the scoring function of another elicitable auxiliary functional (the mean and the value at risk, respectively). This permits approximately sharp backtests up to a small, one-sided sensitivity to the prediction of the auxiliary variable. The ridge mechanism explains why the variance has always been de facto backtestable and allows for similarly efficient ways to backtest the expected shortfall. We discuss the relevance of this result in the current debate on financial regulation (banking and insurance), where value at risk and expected shortfall are adopted as regulatory risk measures.
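In symbols (an illustrative paraphrase of the verbal definition above, with notation chosen here rather than taken from the paper): a functional $\phi$ of the distribution $F$ of a random variable $Y$ is backtestable if there exists a backtest function $Z(z, y)$ such that

```latex
\[
  \mathbb{E}_{Y \sim F}\!\bigl[ Z\bigl(\phi(F),\, Y\bigr) \bigr] \;=\; 0
  \qquad \text{and} \qquad
  z \;\longmapsto\; \mathbb{E}_{Y \sim F}\!\bigl[ Z(z, Y) \bigr]
  \ \text{is strictly monotonic,}
\]
% so that a nonzero sample average of $Z(z, Y)$ both signals and signs a
% misprediction of $\phi(F)$.
```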
A Bayesian semi-parametric curve-fitting procedure, based on a new, flexible, fast, and efficient mixture analysis idea assuming an unknown number of components, is applied to SDSS data to study the relationship between apparent magnitude and redshift for quasars and the possibility of clustering. The cosmological data analysis provides strong evidence against a linear relationship and clearly indicates the possibility of clustering of quasars, especially at high redshift. This sheds new light not only on the issues of evolution, the existence of acceleration or deceleration, and the environment around quasars (say, radio-loud and radio-quiet) at high redshift, but also helps us estimate the cosmological parameters related to acceleration or deceleration.