The problem of optimizing accelerated production testing is a pressing one in most electronic manufacturing facilities. Yet practical models are scarce in the literature, especially for testing high volumes of electronic circuit packs in failure-accelerating environments. In this paper, we develop both log-linear and linear models, based initially on the Weibull distribution. The models are suitable for modeling accelerated production testing data from a temperature-cycled environment. The model is "piecewise" in that the failures in each discrete "piece" of the temperature cycle are modeled as if the testing were conducted in parallel rather than sequentially. An extra covariate indicates the age of the units at the start of each piece. The failures in a piece then depend on the stress in the piece itself and on the time elapsed before the start of the piece. This last dependence captures the influence of reliability growth and yields a linear model as an alternative to the log-linear one. The paper demonstrates a simpler use of Poisson regression. An application using actual production data is described. Uses of the log-logistic, logistic, lognormal, and normal distributions are also illustrated.
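To make the piecewise modeling idea concrete, here is a minimal, hypothetical sketch of a Poisson regression in the spirit of this abstract: failure counts in each temperature-cycle "piece" are regressed on the stress level of the piece and the unit age at the start of the piece, with the piece duration as exposure. All data, variable names, and coefficients below are illustrative assumptions, not the paper's actual model or results.

```python
# Hypothetical sketch of a piecewise Poisson regression for accelerated
# production testing. Simulated data only; not the paper's model or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pieces = 200
stress = rng.uniform(0.5, 2.0, n_pieces)         # stress level within each piece
age_at_start = rng.uniform(0.0, 50.0, n_pieces)  # hours elapsed before the piece
duration = rng.uniform(1.0, 5.0, n_pieces)       # length of each piece (hours)

# Simulate failure counts from a log-linear rate; the negative coefficient
# on age_at_start mimics the reliability-growth effect the abstract mentions.
rate = np.exp(-2.0 + 0.8 * stress - 0.02 * age_at_start)
failures = rng.poisson(rate * duration)

# Log-linear Poisson regression with piece duration as the exposure term.
X = sm.add_constant(np.column_stack([stress, age_at_start]))
fit = sm.GLM(failures, X, family=sm.families.Poisson(), exposure=duration).fit()
print(fit.summary())
```

In this sketch, a fitted negative coefficient on age_at_start would correspond to the reliability growth captured by the age-at-start covariate described in the abstract.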
Software measurement and modeling are intended to improve quality by predicting quality factors, such as reliability, early in the life cycle. The field of software measurement generally assumes that attributes of software products early in the life cycle are related to the amount of information in those products, and thus to the quality that eventually results from the development process.
Kolmogorov complexity and information theory offer a unifying framework for quantifying the amount of information in a finite object, such as a program. Based on these principles, we propose a new synthetic measure of information composed from a set of conventional primitive metrics in a module. Since not all information is equally relevant to fault insertion, we also consider components of the overall information content. We present a model for fault insertion based on a nonhomogeneous Poisson process and Poisson regression (see the illustrative sketch below). This approach is attractive because the underlying assumptions are appropriate for software quality data, and it gives insight into the design attributes that affect fault insertion.
A validation case study of a large sample of modules from a very large telecommunications system provides empirical evidence that the components of synthetic module complexity can be useful in software quality modeling. A large telecommunications system is an example of a computer system with rigorous software quality requirements.
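As a rough illustration of the fault-insertion model described above, and not the authors' implementation, the sketch below fits a Poisson regression of module fault counts on log-scaled metrics; the names lines, operators, and operands are placeholders for the paper's conventional primitive metrics, and the data are simulated.

```python
# Hypothetical sketch: Poisson regression of module fault counts on
# primitive software metrics. Simulated, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_modules = 500
df = pd.DataFrame({
    "lines": rng.integers(50, 2000, n_modules),     # module size
    "operators": rng.integers(10, 500, n_modules),  # operator count
    "operands": rng.integers(10, 500, n_modules),   # operand count
})
# Simulate fault counts from a log-linear model in the log-scaled metrics.
mu = np.exp(-3.0 + 0.4 * np.log(df["lines"]) + 0.2 * np.log(df["operands"]))
df["faults"] = rng.poisson(mu)

# Poisson GLM with a log link: each coefficient acts as an elasticity of
# the expected fault count with respect to the corresponding metric.
fit = smf.glm("faults ~ np.log(lines) + np.log(operators) + np.log(operands)",
              data=df, family=sm.families.Poisson()).fit()
print(fit.summary())
```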
The Murray–Darling Basin (MDB) is Australia’s prime agricultural region, where drought and hotter weather pose a significant threat to rural residents’ mental health, thereby increasing their suicide risk. We investigate the impact of drought and hotter temperatures on monthly suicide counts within local areas in the MDB from 2006 to 2016. Using Poisson fixed-effects regression modeling, we find that extreme drought and hotter temperatures were associated with increased total suicide rates. The effects of extreme drought and temperature on suicide were heterogeneous across gender and age groups, with younger men the most vulnerable. Areas with higher percentages of Indigenous and farmer populations were identified as hot spots vulnerable to increased temperatures and extreme drought. Green space coverage (and, to some extent, higher incomes) moderated the relationship between drought and suicide. Targeted interventions in vulnerable groups and hot spot areas are warranted to reduce the effect of climate change on suicide.
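For readers unfamiliar with the method, the following is a hedged sketch of a Poisson fixed-effects regression in the spirit of this study: monthly suicide counts per local area are regressed on a drought index and a temperature measure, with area fixed effects entering as dummy variables and population as the exposure. The variable names and the simulated panel are assumptions for illustration, not the study's data or exact specification.

```python
# Hypothetical sketch of a panel Poisson fixed-effects regression.
# Simulated data only; not the study's data or specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
areas, months = 30, 132  # e.g., a 2006-2016 monthly panel
n = areas * months
df = pd.DataFrame({
    "area": np.repeat(np.arange(areas), months),
    "drought": rng.uniform(0, 1, n),       # drought index
    "temp_anom": rng.normal(0, 1, n),      # temperature anomaly
    "population": rng.integers(5_000, 50_000, n),
})
# Simulate counts so drought and heat raise the suicide rate per capita.
rate = np.exp(-9.5 + 0.3 * df["drought"] + 0.1 * df["temp_anom"])
df["suicides"] = rng.poisson(rate * df["population"])

# Area fixed effects via C(area) dummies; population enters as exposure.
fit = smf.glm("suicides ~ drought + temp_anom + C(area)",
              data=df, family=sm.families.Poisson(),
              exposure=df["population"]).fit()
print(fit.params[["drought", "temp_anom"]])
```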
The main purpose of this introductory chapter is to give an overview of the following 130 papers, which discuss financial econometrics, mathematics, statistics, and machine learning. There are eight sections in this introductory chapter. Section 1 is the introduction; Section 2 discusses financial econometrics; Section 3 explores financial mathematics; Section 4 discusses financial statistics; Section 5 discusses financial technology and machine learning; Section 6 explores applications of financial econometrics, mathematics, statistics, and machine learning; and Section 7 gives an overview of the handbook in terms of chapter and keyword classification. Finally, Section 8 provides a summary and some concluding remarks.