Knowledge absorptive capacity (ACAP) has been recognized as a driving force for ICT-driven digital transformation; however, questions remain about how to maximize opportunities for implementing Industry 4.0 (I4.0) while considering the role of ACAP. Our primary objectives are to analyze how ACAP influences I4.0 opportunities (strategic, operational, and environmental/social) and to examine the mediating role of these opportunities in the relationship between ACAP and I4.0 implementation. This empirical, predictive study analyzed data from 200 manufacturing SMEs in Guanajuato, Mexico, using Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS 4 with a Type I hierarchical component model. The findings underscore that ACAP significantly enhances the potential for successful I4.0 implementation. Furthermore, the study highlights the critical importance of a strategic approach to leveraging I4.0 opportunities. Interestingly, environmental and social opportunities showed limited influence on I4.0 adoption, with no evidence of a mediating role. This research helps illuminate the complex interplay between ACAP, I4.0 opportunities, and their collective contribution to the successful implementation of Industry 4.0 in Mexican SMEs. Finally, the study offers actionable insights for industrial practitioners and contributes to methodological advancements in PLS-SEM analysis.
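To make the mediation structure concrete, the following is a minimal sketch only, not the study's PLS-SEM estimation: it illustrates an ACAP → opportunities → I4.0 implementation mediation path with ordinary regressions on simulated composite scores. The variable names (acap, strat_opp, i40_impl) and coefficients are hypothetical.

```python
# Simplified regression-based mediation sketch (NOT the paper's PLS-SEM in SmartPLS 4).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # sample size matching the 200 SMEs in the study
acap = rng.normal(size=n)
strat_opp = 0.6 * acap + rng.normal(scale=0.8, size=n)                    # ACAP -> strategic opportunities
i40_impl = 0.3 * acap + 0.5 * strat_opp + rng.normal(scale=0.8, size=n)   # -> I4.0 implementation

df = pd.DataFrame({"acap": acap, "strat_opp": strat_opp, "i40_impl": i40_impl})

# Path a: ACAP -> mediator
path_a = sm.OLS(df["strat_opp"], sm.add_constant(df["acap"])).fit()
# Paths b and c': mediator and ACAP -> outcome
path_bc = sm.OLS(df["i40_impl"], sm.add_constant(df[["acap", "strat_opp"]])).fit()

indirect = path_a.params["acap"] * path_bc.params["strat_opp"]
print(f"indirect (mediated) effect of ACAP via strategic opportunities: {indirect:.3f}")
print(f"direct effect of ACAP: {path_bc.params['acap']:.3f}")
```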
Motivated by the theoretical link between real exchange rates and oil prices, we use a univariate moving average (MA) model and an augmented MA (A-MA) model to generate multi-period forecasts of China's real effective exchange rate for 2008–2018. The MA model draws on past information in real exchange rates, while the A-MA model draws on past information in both real exchange rates and oil prices. We show that the A-MA forecasts are unbiased and embody useful predictive information beyond that contained in the MA forecasts. In addition, the A-MA forecasts are directionally accurate under asymmetric loss. Such accurate forecasts are useful as inputs for policymakers designing an optimal real exchange rate policy to promote trade and attract foreign investment, and for foreign entities that regard China as an attractive environment for investing in various sectors.
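As an illustration of the forecasting setup, here is a hedged sketch on simulated data rather than the authors' actual estimation: an MA(q) model for the real effective exchange rate and an augmented model that adds oil prices as an exogenous regressor. The series names (reer, oil), lag order, and sample split are assumptions for the example.

```python
# Illustrative MA vs. augmented-MA forecasting sketch (simulated data, hypothetical settings).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
idx = pd.period_range("2000-01", "2018-12", freq="M")
reer = pd.Series(np.cumsum(rng.normal(size=len(idx))) + 100, index=idx, name="reer")
oil = pd.Series(np.cumsum(rng.normal(size=len(idx))) + 50, index=idx, name="oil")

q, horizon = 2, 12

# Univariate MA model: past information in the real exchange rate only
ma_fit = ARIMA(reer.loc[:"2007-12"], order=(0, 0, q)).fit()
ma_fc = ma_fit.forecast(steps=horizon)

# Augmented MA (A-MA) model: adds oil prices as an exogenous regressor
ama_fit = ARIMA(reer.loc[:"2007-12"], exog=oil.loc[:"2007-12"], order=(0, 0, q)).fit()
ama_fc = ama_fit.forecast(steps=horizon, exog=oil.loc["2008-01":"2008-12"])

print(ma_fc.head())
print(ama_fc.head())
```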
In this paper we collect a number of technical issues that arise when constructing the matrix representation of the most general nuclear mean field Hamiltonian, within which "all terms allowed by general symmetries are considered not only in principle but also in practice". Such a general posing of the problem is necessary when investigating the predictive power of mean field theories by means of a well-posed inverse problem [J. Dudek et al., Int. J. Mod. Phys. E21 (2012) 1250053]. To our knowledge, ill-posed mean field inverse problems quite often arise in practical realizations, which makes reliable extrapolations into unknown areas of nuclei impossible. The conceptual and technical issues related to the inverse problem have been discussed in the above-mentioned article, whereas here we focus on "how to calculate the matrix elements, fast and with high numerical precision, when solving the inverse problem". [For space-limitation reasons we illustrate the principal techniques using the example of central interactions.]
In this paper, a specific aspect of the prediction problem is considered: high predictive power is understood as the ability to reproduce the correct behavior of model solutions at predefined values of a subset of parameters. The problem is discussed in the context of a specific mathematical model, the gene circuit model for the segmentation gap gene system in early Drosophila development. A shortcoming of the model is that, when fitted to wild type (WT) data, it cannot be used to predict the system behavior in mutants. To answer the question of whether the experimental data contain enough information for correct prediction, we introduce two measures of predictive power. The first measure reveals the biologically substantiated low sensitivity of the model to the parameters responsible for the correct reconstruction of expression patterns in mutants, while the second also takes into account their correlation with the other parameters. It is demonstrated that the model solution obtained by fitting to gene expression data in WT and Kr- mutants simultaneously, which exhibits high predictive power, is characterized by much higher values of both measures than solutions fitted to WT data alone. This result leads us to conclude that the information contained in WT data is insufficient to reliably estimate the large number of model parameters and to provide predictions for mutants.
Among the model parameters characterizing complex biological systems, there are commonly some that do not significantly influence the quality of the fit to experimental data, the so-called “sloppy” parameters. Sloppiness can be expressed mathematically through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior under altered input (e.g. knockout mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce Relative Sensitivity Analysis, a method for evaluating predictive power under parameter estimation uncertainty. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced through a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior for any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
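For readers unfamiliar with the saturating regulation term, the following minimal sketch shows one form commonly used in gene circuit models of this type, g(u) = (u/sqrt(u^2 + 1) + 1)/2, applied to a total regulatory input built from interconnectivity weights, a maternal (Bicoid) term, and a threshold. The specific parameter values, gene count, and variable names below are hypothetical illustrations, not taken from the paper.

```python
# Sketch of the saturating sigmoid regulation-expression function assumed for gene circuit models.
import numpy as np

def g(u):
    """Saturating sigmoid: maps total regulatory input u to the interval (0, 1)."""
    return 0.5 * (u / np.sqrt(u**2 + 1.0) + 1.0)

def regulatory_input(v, T, m, v_bcd, h):
    """u_a = sum_b T[a, b] * v[b] + m[a] * v_bcd + h[a] for each target gene a."""
    return T @ v + m * v_bcd + h

# Toy example with 4 gap genes and arbitrary (hypothetical) parameter values
T = np.array([[ 0.0, -0.5, -0.2,  0.1],
              [-0.3,  0.0, -0.4,  0.2],
              [ 0.1, -0.6,  0.0, -0.3],
              [ 0.2,  0.1, -0.5,  0.0]])   # gene-gene interconnectivity weights
m = np.array([0.8, 0.5, 0.3, 0.1])         # response to the maternal Bicoid gradient
h = np.array([-1.0, -0.5, -0.8, -0.2])     # thresholds
v = np.array([0.4, 0.6, 0.2, 0.1])         # current gap-gene concentrations
v_bcd = 0.7                                # local Bicoid concentration

u = regulatory_input(v, T, m, v_bcd, h)
print("regulation-expression output g(u):", g(u))
```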
The Malaysian economy suffered serious consequences from the 1997 Asian financial crisis. As a result, many listed companies became financially distressed due to mounting debts, huge accumulated losses, and poor cash flows. Under the provisions of Practice Note 4/2001 (PN4), issued by Bursa Malaysia on February 15, 2001, 91 public listed companies that fulfilled the PN4 criteria were classified as financially distressed. Financial distress precedes bankruptcy; however, not all financially distressed companies end up bankrupt. The main purpose of this paper is to use financial variables to identify potentially distressed firms with a logistic regression model. The predictive ability of the model was then analyzed; the findings are encouraging and consistent for the sample analyzed and the period of study.
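The modelling step described above can be sketched as a logistic regression of a distress indicator on firm-level financial ratios. The sketch below uses simulated data; the ratio names (leverage, roa, cash_flow_to_debt) are hypothetical stand-ins, not the paper's actual variables.

```python
# Hedged sketch: logistic regression for financial distress prediction on simulated ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "leverage": rng.normal(0.5, 0.2, n),           # total debt / total assets
    "roa": rng.normal(0.05, 0.1, n),               # return on assets
    "cash_flow_to_debt": rng.normal(0.2, 0.15, n)  # operating cash flow / total debt
})
# Simulated distress indicator: higher leverage and lower profitability raise distress risk
logit_true = 3 * df["leverage"] - 8 * df["roa"] - 4 * df["cash_flow_to_debt"] - 0.5
df["distressed"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_true))).astype(int)

X = sm.add_constant(df[["leverage", "roa", "cash_flow_to_debt"]])
model = sm.Logit(df["distressed"], X).fit(disp=0)
print(model.summary())

# In-sample classification accuracy at a 0.5 cut-off, as a simple check of predictive ability
pred = (model.predict(X) > 0.5).astype(int)
print("accuracy:", (pred == df["distressed"]).mean())
```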
We discuss the predictive power of transverse-momentum-dependent distributions as a function of the kinematics and comment on recent extractions from experimental data.
In this chapter, we apply a three-stage approach using an intermediate classification period between the estimation and test periods. In the intermediate period, we stratify individual firms into deciles based on the predictive power of the Carhart 4-factor model, measured by the out-of-sample R-squared of the prediction in this period. Our motive for this stratification is that firms with poor out-of-sample predictive power of the estimated model are likely to suffer from coefficient instability, and that these instabilities will result in a mismeasurement of expected returns in the test period. The empirical results show that lower predictive power deciles have larger averaged absolute changes of estimated coefficients, which is our proxy for coefficient instability, and lower averaged out-of-sample R-squared in the test period.
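The first two stages of this procedure can be illustrated with a short sketch on simulated data: estimate the Carhart 4-factor regression per firm in the estimation window, compute the out-of-sample R-squared in the classification window, and sort firms into deciles by that R-squared. The window lengths, factor values, and firm returns below are all simulated assumptions, not the chapter's data.

```python
# Illustrative sketch of per-firm Carhart 4-factor estimation and decile stratification
# by out-of-sample R-squared (simulated data, hypothetical window lengths).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_firms, t_est, t_cls = 100, 60, 24
factors = pd.DataFrame(rng.normal(scale=0.04, size=(t_est + t_cls, 4)),
                       columns=["MKT", "SMB", "HML", "MOM"])
X_all = np.column_stack([np.ones(len(factors)), factors.values])

records = []
for firm in range(n_firms):
    betas_true = rng.normal(size=5)
    returns = X_all @ betas_true + rng.normal(scale=0.05, size=len(factors))

    # Stage 1: estimate coefficients in the estimation period
    X_est, y_est = X_all[:t_est], returns[:t_est]
    betas_hat, *_ = np.linalg.lstsq(X_est, y_est, rcond=None)

    # Stage 2: out-of-sample R-squared in the intermediate classification period
    X_cls, y_cls = X_all[t_est:], returns[t_est:]
    resid = y_cls - X_cls @ betas_hat
    r2_oos = 1 - resid.var() / y_cls.var()
    records.append({"firm": firm, "r2_oos": r2_oos})

panel = pd.DataFrame(records)
panel["decile"] = pd.qcut(panel["r2_oos"], 10, labels=False) + 1  # 1 = lowest predictive power
print(panel.groupby("decile")["r2_oos"].mean())
```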
The progression of theories suggested for our world, from ego- to geo- to helio-centric models to universe and multiverse theories and beyond, shows one tendency: the size of the described worlds increases, with humans being expelled from their center to ever more remote and random locations. If pushed too far, a potential theory of everything (TOE) is actually more a theory of nothing (TON). Indeed, such theories have already been developed. I show that including observer localization into such theories is necessary and sufficient to avoid this problem. I develop a quantitative recipe to identify TOEs and distinguish them from TONs and theories in between. This precisely shows what the problem is with some recently suggested universal TOEs.