
  • Article (No Access)

    TESTING FOR NEGLECTED NONLINEARITY USING EXTREME LEARNING MACHINES

    We introduce a test statistic for neglected nonlinearity based on extreme learning machines, which we call the ELMNN test. The ELMNN test is convenient and widely applicable because it is obtained as a by-product of estimating a linear model. We provide a set of regularity conditions under which the proposed statistic asymptotically follows a chi-squared distribution under the null hypothesis. We conduct Monte Carlo experiments to examine its finite-sample behavior; the results show that the test exhibits the properties predicted by our theory.
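
The abstract describes a statistic obtained as a by-product of a linear fit. As a rough sketch of this kind of LM-type test (not the paper's exact construction), the code below regresses the linear model's residuals on the original regressors plus random ELM hidden-layer features; the feature count `q`, the `tanh` activation, and the n·R² form are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

def elm_nn_test(y, X, q=5, seed=0):
    """LM-type test for neglected nonlinearity using random (ELM-style)
    hidden-layer features.  Generic sketch, not the paper's exact statistic."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Z = np.column_stack([np.ones(n), X])       # linear-model regressors
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ beta                           # residuals of the linear model
    W = rng.normal(size=(X.shape[1], q))       # random ELM input weights
    b = rng.normal(size=q)                     # random ELM biases
    Phi = np.tanh(X @ W + b)                   # hidden-layer outputs
    A = np.column_stack([Z, Phi])              # auxiliary regression
    g, *_ = np.linalg.lstsq(A, e, rcond=None)
    u = e - A @ g
    r2 = 1.0 - (u @ u) / (e @ e)
    stat = n * r2                              # approx. chi2(q) under the null
    return stat, chi2.sf(stat, df=q)

rng = np.random.default_rng(1)
x = rng.normal(size=(400, 1))
y_lin = 1.0 + 0.5 * x[:, 0] + 0.1 * rng.normal(size=400)        # linear DGP
y_quad = 1.0 + 0.5 * x[:, 0] ** 2 + 0.1 * rng.normal(size=400)  # nonlinear DGP
stat_lin, p_lin = elm_nn_test(y_lin, x)
stat_quad, p_quad = elm_nn_test(y_quad, x)
```

Because the hidden-layer weights are drawn at random rather than trained, the auxiliary regression adds essentially no cost beyond the original linear fit, which is the convenience the abstract emphasizes.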

  • Article (No Access)

    THREE TESTING PROCEDURES FOR A COMPETING RISKS MODEL

    This paper is concerned with testing for the equality of failure rates in a competing risks model with two risks. Three testing procedures are investigated, namely the Score test, the Likelihood Ratio test, and the Wald test. The Wald test has been considered the most powerful in multivariate linear regression analysis.[1] In our application, however, the Wald test is the most efficient when the failure rate of the second failure type is strictly smaller than that of the first; otherwise the Score or the Likelihood Ratio test is preferred. This phenomenon is illustrated with data from a mechanical-switch life test.[2] The results have been extended to a k-competing risks model. A simulation study is also given to examine the performance of the three tests.
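
In the special case of two exponential competing risks, the three statistics have simple closed forms. The sketch below is a textbook-style illustration under that exponential assumption, not necessarily the paper's exact setup; `d1` and `d2` are the observed failure counts of the two types and `T` is the total time at risk:

```python
import math

def competing_risk_tests(d1, d2, T):
    """Score, LR and Wald statistics for H0: lambda1 = lambda2 with two
    exponential competing risks.  Illustrative sketch only."""
    d = d1 + d2
    lam1, lam2, lam0 = d1 / T, d2 / T, d / (2 * T)   # MLEs, restricted MLE
    # Score statistic: reduces to a comparison of the two counts.
    score = (d1 - d2) ** 2 / d
    # Likelihood ratio: 2 * (unrestricted - restricted log-likelihood);
    # the -(lambda1 + lambda2) * T terms cancel against -2 * lambda0 * T.
    lr = 2.0 * (d1 * math.log(lam1) + d2 * math.log(lam2) - d * math.log(lam0))
    # Wald statistic with observed-information variances lam_j^2 / d_j.
    wald = (lam1 - lam2) ** 2 / (lam1 ** 2 / d1 + lam2 ** 2 / d2)
    return score, lr, wald

score, lr, wald = competing_risk_tests(30, 20, 100.0)
```

Each statistic is compared with a chi-squared(1) critical value. Note that with this particular observed-information variance the Wald form algebraically coincides with the score form (d1 − d2)²/(d1 + d2); in general, and in the paper's setting, the three tests differ.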

  • Chapter (No Access)

    Chapter 9: Maximum Likelihood Estimation and Quasi-Maximum Likelihood Estimation

    Conditional probability distribution models have been widely used in economics and finance. In this chapter, we introduce two closely related popular methods to estimate conditional distribution models—Maximum Likelihood Estimation (MLE) and Quasi-MLE (QMLE). MLE is a parameter estimator that maximizes the model likelihood function of the random sample when the conditional distribution model is correctly specified, and QMLE is a parameter estimator that maximizes the model likelihood function of the random sample when the conditional distribution model is misspecified. Because the score function is an MDS and the dynamic Information Matrix (IM) equality holds when a conditional distribution model is correctly specified, the asymptotic properties of MLE are analogous to those of the OLS estimator when the regression disturbance is an MDS with conditional homoskedasticity, and we can use the Wald test, LM test and Likelihood Ratio (LR) test for hypothesis testing, where the LR test is analogous to the J · F test statistic. On the other hand, when the conditional distribution model is misspecified, the score function has mean zero, but it may no longer be an MDS and the dynamic IM equality may fail. As a result, the asymptotic properties of QMLE are analogous to those of the OLS estimator when the regression disturbance displays serial correlation and/or conditional heteroskedasticity. Robust Wald tests and LM tests can be constructed for hypothesis testing, but the LR test can no longer be used, for a reason similar to the failure of the F-test statistic when the regression disturbance displays serial correlation and/or conditional heteroskedasticity. We discuss methods for testing the MDS property of the score function, the dynamic IM equality, and correct specification of a conditional distribution model.
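
The OLS analogy the abstract draws can be made concrete. The sketch below (illustrative, not the chapter's notation) fits a linear model by OLS, which coincides with Gaussian QMLE, and computes both the classical covariance, valid under correct specification, and the robust "sandwich" covariance that remains valid under misspecification such as conditional heteroskedasticity:

```python
import numpy as np

def ols_with_sandwich(y, X):
    """OLS (= Gaussian QMLE) estimates with the classical covariance and the
    heteroskedasticity-robust sandwich covariance.  Illustrative sketch."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    classical = (e @ e / (n - k)) * XtX_inv       # sigma^2 * (X'X)^-1
    meat = X.T @ (X * (e ** 2)[:, None])          # sum_i e_i^2 x_i x_i'
    sandwich = XtX_inv @ meat @ XtX_inv           # "bread-meat-bread" form
    return beta, classical, sandwich

rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = 1.0 + 2.0 * x + np.abs(x) * rng.normal(size=500)   # heteroskedastic noise
beta, V_classical, V_robust = ols_with_sandwich(y, X)
```

A robust Wald test of a hypothesis on `beta` would use `V_robust`; using `V_classical` under misspecification is what invalidates the naive LR-style inference the abstract warns about.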

  • Chapter (No Access)

    Chapter 9: Hypothesis Testing

    Hypothesis testing is one of the two most important objectives in statistical inference. In this chapter, we introduce basic concepts in hypothesis testing and discuss its three fundamental principles: the Wald test, the Lagrange multiplier test, and the likelihood ratio test.
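
A minimal textbook illustration of the three principles (an assumed example, not taken from the chapter): testing H0: p = p0 from a Bernoulli sample with x successes in n trials, where the three statistics differ mainly in where the variance is evaluated:

```python
import math

def wald_lm_lr(x, n, p0):
    """Wald, Lagrange multiplier (score), and likelihood ratio statistics for
    H0: p = p0 in a Bernoulli sample.  Minimal textbook sketch (0 < x < n)."""
    p = x / n                                       # unrestricted MLE
    wald = (p - p0) ** 2 / (p * (1 - p) / n)        # variance at the MLE
    lm = (p - p0) ** 2 / (p0 * (1 - p0) / n)        # variance at the null
    lr = 2.0 * (x * math.log(p / p0)
                + (n - x) * math.log((1 - p) / (1 - p0)))
    return wald, lm, lr

wald, lm, lr = wald_lm_lr(60, 100, 0.5)   # 60 successes in 100 trials
```

All three are compared with a chi-squared(1) critical value; in this example they give 4.17, 4.00, and 4.03 respectively, agreeing asymptotically but not in finite samples.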

  • Chapter (No Access)

    Chapter 11: DUMMY VARIABLES AND ANOVA APPLICATION: TIME EFFECT ANOMALIES

    In this chapter, we consider the use of dummy variables or indicator variables as explanatory variables, and show how they are used in financial pricing.
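
As a hypothetical illustration of the kind of time-effect dummy regression the chapter studies, the sketch below builds a Monday indicator and recovers a simulated "Monday effect" by OLS; the effect size and noise level are made up for the example:

```python
import numpy as np

# Hypothetical day-of-week regression: r_t = alpha + delta * D_Monday,t + e_t,
# where D_Monday,t = 1 on Mondays and 0 otherwise.
rng = np.random.default_rng(42)
n = 250                                    # roughly one trading year
day = np.arange(n) % 5                     # 0 = Monday, ..., 4 = Friday
monday = (day == 0).astype(float)          # dummy (indicator) variable
returns = 0.05 - 0.10 * monday + 0.01 * rng.normal(size=n)  # built-in effect

X = np.column_stack([np.ones(n), monday])
(alpha, delta), *_ = np.linalg.lstsq(X, returns, rcond=None)
# delta equals the average Monday return minus the average non-Monday return.
```

With a single dummy, the OLS slope `delta` is exactly the difference in group means, which is why dummy-variable regression and one-way ANOVA give the same time-effect estimates.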