The credit channels of monetary policy transmission have not been debated much in the literature, especially in the context of the European Monetary Union (EMU) and the apparent rising fragmentation of a previously well-integrated European banking system. This discussion is even more important in the aftermath of the global financial crisis (GFC) and the decade-long European debt crisis (EDC) that a number of European countries have been experiencing. This paper investigates the interconnectedness of credit channels in policy transmission in the context of the EMU using bank lending survey (BLS) data for a sample of eight European countries. One of the main contributions of this paper is to use BLS data for all 11 credit channels. We use principal component analysis (PCA) to investigate the impact of monetary policy on the interconnectedness structure of credit channels. PCA is conducted for the EMU both across channels and across sample countries. EFA orthogonal vector rotation indicates a core-versus-periphery interconnectedness pattern. The results suggest that the household balance sheet channel, the borrower cash flow channel and the interest rate channel are the most divergent channels in the EMU.
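As a hedged illustration of the PCA step described above, the sketch below applies principal component analysis to a (time x channel) panel of survey-style diffusion indices; the channel names and the randomly generated data are placeholders, not the BLS dataset used in the paper.

```python
# Minimal sketch: PCA on a (time x channel) panel of bank lending survey
# diffusion indices. The data here are random placeholders, not the BLS data
# used in the paper; the channel names are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
channels = ["household_balance_sheet", "borrower_cash_flow", "interest_rate",
            "bank_lending", "bank_capital", "risk_taking"]
X = rng.standard_normal((80, len(channels)))   # 80 quarterly observations

# Standardize each channel, then extract principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
# Loadings show how strongly each credit channel is tied to the common factors,
# a rough measure of channel interconnectedness.
loadings = pca.components_.T
for name, row in zip(channels, loadings):
    print(f"{name:28s}", np.round(row, 2))
```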
This study examines the convergence of total per capita health expenditure across 25 Indian states by disaggregating total per capita health expenditure into revenue and capital health expenditure. The study uses the recent Lagrange Multiplier and Residual Augmented Least Squares-Lagrange Multiplier unit root and club convergence tests. Results of the unit root tests indicate evidence of convergence for aggregate per capita health expenditure and its components. The speed of convergence of total per capita health expenditure is found to be approximately 0.15%. Results from sub-panels reveal that special-category states are converging. Based on the findings, it may be appropriate for the central government to design an equalizing transfer system and revisit poor states’ per capita health expenditure patterns.
Compared to the single-strategy particle swarm optimization (PSO) algorithm, multi-strategy PSO shows potential advantages in solving complex optimization problems. In this study, a novel framework for multi-strategy co-evolutionary PSO (M-PSO) is first proposed, in which a matrix parameter pool scheme is introduced. In this scheme, multiple strategies are accommodated in the matrix parameter pool and new hybrid strategies can be generated. Then, a convergence analysis is carried out and convergence conditions are provided for the co-evolutionary PSO framework when specific operators are used. Subsequently, based on this framework, a novel multi-strategy co-evolutionary PSO is developed using Q-learning, a classical reinforcement learning technique. In the proposed M-PSO, both parameter optimization by the orthogonal method and the convergence conditions are embedded to improve the performance of the algorithm. Finally, experiments are conducted on two test suites, CEC2017 and CEC2019, and the results indicate that M-PSO outperforms several meta-heuristic algorithms on most of the test problems.
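For orientation, the following is a minimal sketch of a generic global-best PSO update on a simple test function; it is not the M-PSO framework itself, which additionally maintains a matrix parameter pool and a Q-learning-based strategy selector on top of an update rule of this kind.

```python
# Generic global-best PSO on the sphere function, for orientation only.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def pso(obj, dim=10, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest = pos.copy()
    pbest_val = np.array([obj(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(sphere)
print("best value:", best_val)
```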
In this paper, we are concerned with the approximation of functions by single-hidden-layer neural networks with ReLU activation functions on the unit circle. In particular, we are interested in the case when the number of data points exceeds the number of nodes. We first study the convergence to equilibrium of the stochastic gradient flow associated with the cost function with a quadratic penalization. Specifically, we prove a Poincaré inequality for a penalized version of the cost function with explicit constants that are independent of the data and of the number of nodes. As our penalization biases the weights to be bounded, this leads us to study how well a network with bounded weights can approximate a given function of bounded variation (BV).
Our main contribution concerning the approximation of BV functions is a result which we call the localization theorem. Specifically, it states that the expected error of the constrained problem, in which the lengths of the weights are less than R, is of order R^(-1/9) relative to the unconstrained problem (the global optimum). The proof is novel in this setting and is inspired by techniques from the regularity theory of elliptic partial differential equations. Finally, we quantify the expected value of the global optimum by proving a quantitative version of the universal approximation theorem.
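A rough numerical sketch of the setting studied in this paper is given below: a one-hidden-layer ReLU network fit to samples of a target function on the unit circle by plain gradient descent on a square loss with a quadratic weight penalty (a discrete stand-in for the penalized gradient flow). The target function, penalty weight and step size are illustrative choices.

```python
# Sketch: fit a one-hidden-layer ReLU network to samples of a target function
# on the unit circle, minimizing square loss plus a quadratic penalty on the
# weights. Target, penalty weight and step size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_nodes, lam, lr = 200, 20, 1e-3, 1e-2

theta = rng.uniform(0.0, 2 * np.pi, n_data)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # points on the unit circle
y = np.sign(np.sin(3 * theta))                            # a BV target function

W = rng.standard_normal((n_nodes, 2)) * 0.5               # inner weights
a = rng.standard_normal(n_nodes) * 0.5                    # outer weights

for step in range(5000):
    pre = X @ W.T                                          # (n_data, n_nodes)
    act = np.maximum(pre, 0.0)                             # ReLU
    err = act @ a - y
    # Gradients of 0.5*mean(err^2) + 0.5*lam*(|a|^2 + |W|^2)
    grad_a = act.T @ err / n_data + lam * a
    grad_W = ((err[:, None] * (pre > 0) * a).T @ X) / n_data + lam * W
    a -= lr * grad_a
    W -= lr * grad_W

final_loss = (0.5 * np.mean((np.maximum(X @ W.T, 0) @ a - y) ** 2)
              + 0.5 * lam * (np.sum(a ** 2) + np.sum(W ** 2)))
print("final penalized loss:", final_loss)
```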
Solving R. J. Daverman’s problem, V. S. Krushkal described sticky Cantor sets in ℝ^N for N ≥ 4. Such sets cannot be isotoped off themselves by small ambient isotopies. Using Krushkal sets, we present a new series of wild embeddings related to a question of J. W. Cannon and S. G. Wayment (1970). Namely, for N ≥ 4, we construct examples of compacta X ⊂ ℝ^N with the following two properties: some sequence {X_i ⊂ ℝ^N ∖ X, i ∈ ℕ} converges homeomorphically to X, but no uncountable family of pairwise disjoint sets Y_α ⊂ ℝ^N exists such that each Y_α is ambiently homeomorphic to X.
In this paper, we give a connection between diffusion processes and classical mechanical systems. More precisely, we consider a system of several massive particles interacting with an ideal gas via interaction potentials and evolving according to classical mechanical principles. We prove the almost sure existence and uniqueness of the solution of the considered dynamics, prove the convergence of the solution under a certain scaling limit, and give a precise expression for the limiting process, which is a diffusion process.
The training methodology of the Back Propagation Network (BPN) is well documented. One aspect of the BPN that requires investigation is whether or not it will get trained for a given training data set and architecture. In this paper, the behavior of the BPN is analyzed during its training phase for both convergent and divergent training data sets. The evolution of the weights during the training phase was monitored for the purpose of analysis. The weight evolution was plotted as a return map and characterized by means of its fractal dimension. This fractal-dimensional analysis of the weight evolution trajectories provides new insight into the behavior of the BPN and the dynamics of its weight evolution.
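The snippet below is a hedged sketch of the kind of analysis described: build a return map (w_t, w_{t+1}) from a scalar weight trajectory and estimate its fractal dimension by standard box counting. The trajectory here is synthetic, and the estimator may differ in detail from the one used in the paper.

```python
# Sketch: return map of a scalar trajectory and its box-counting dimension.
# The trajectory is synthetic (logistic map), standing in for a weight series.
import numpy as np

def box_counting_dimension(points, eps_list):
    counts = []
    pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)
    for eps in eps_list:
        boxes = set(map(tuple, np.floor(pts / eps).astype(int)))
        counts.append(len(boxes))
    # Slope of log N(eps) versus log(1/eps) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

# Synthetic "weight evolution": a chaotic scalar sequence as a stand-in.
w = np.empty(5000)
w[0] = 0.4
for t in range(4999):
    w[t + 1] = 3.9 * w[t] * (1.0 - w[t])

return_map = np.column_stack([w[:-1], w[1:]])
eps_list = [0.1, 0.05, 0.02, 0.01, 0.005]
print("box-counting dimension ~", round(box_counting_dimension(return_map, eps_list), 2))
```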
We propose a two-layer neural network for computing an approximate convex hull of a set of points or a set of circles/ellipses of different sizes. The algorithm is based on an elegant concept: the shrinking of a rubber band surrounding the set of planar objects. Logically, a set of neurons is placed on a circle (the rubber band) surrounding the objects. Each neuron has a parameter vector associated with it, which may be viewed as the current position of the neuron. The given set of points/objects exerts a force of attraction on every neuron, which determines how its current position is updated (as if the force determined the direction of movement of the neuron lying on the rubber band). As the network evolves, the neurons (parameter vectors) approximate the convex hull more and more accurately. The scheme can be applied to find the convex hull of a planar set of circles or ellipses or a mixture of the two. Some properties related to the evolution of the algorithm are also presented.
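A toy version of the rubber-band idea, for points only, is sketched below; the attraction rule (each neuron is pulled toward the most extreme data point in its outward direction) and the step size are simple stand-ins for the force law described in the paper.

```python
# Toy shrinking rubber band: neurons start on a circle enclosing the points and
# are repeatedly pulled toward the extreme data point in their outward
# direction. A simple stand-in for the attraction dynamics, points only.
import numpy as np

rng = np.random.default_rng(2)
points = rng.standard_normal((200, 2))
center = points.mean(axis=0)
radius = 2.0 * np.max(np.linalg.norm(points - center, axis=1))

n_neurons, lr = 60, 0.2
angles = np.linspace(0.0, 2 * np.pi, n_neurons, endpoint=False)
neurons = center + radius * np.column_stack([np.cos(angles), np.sin(angles)])

for _ in range(100):
    for k in range(n_neurons):
        u = neurons[k] - center
        u = u / np.linalg.norm(u)
        # Extreme data point in this neuron's outward direction.
        target = points[np.argmax(points @ u)]
        neurons[k] += lr * (target - neurons[k])

# The neuron ring now traces an approximate convex hull of the point set.
print(np.round(neurons[:5], 2))
```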
Multilayer feed-forward neural networks are widely used and are typically trained by minimizing an error function. Back propagation (BP) is a well-known training method for multilayer networks, but it often suffers from slow convergence. To make learning faster, we propose the 'Fusion of Activation Functions' (FAF), in which different conventional activation functions (AFs) are combined to compute the final activation. This has not yet been studied extensively. One of the sub-goals of the paper is to examine the role of linear AFs in the combination. We investigate whether FAF can make learning faster. The validity of the proposed method is examined by performing simulations on nine challenging real benchmark classification and time-series prediction problems. FAF has been applied to the 2-bit, 3-bit and 4-bit parity, breast cancer, diabetes, heart disease, Iris, Wine, Glass and Soybean classification problems. The algorithm is also tested on the Mackey-Glass chaotic time-series prediction problem. The algorithm is shown to work better than other AFs used independently in BP, such as the sigmoid (SIG), arctangent (ATAN) and logarithmic (LOG) functions.
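As a hedged sketch, a fused activation could be formed as a weighted combination of standard AFs; the particular logarithmic form and the fixed mixing weights below are illustrative assumptions rather than the exact FAF rule of the paper.

```python
# Sketch of a fused activation: a convex combination of sigmoid, arctangent and
# a logarithmic-type activation. The logarithmic form and the mixing weights
# alpha are illustrative assumptions, not necessarily the paper's FAF rule.
import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

def atan_act(x):
    return np.arctan(x)

def log_act(x):
    return np.sign(x) * np.log1p(np.abs(x))

def fused_activation(x, alpha=(0.4, 0.3, 0.3)):
    a1, a2, a3 = alpha
    return a1 * sig(x) + a2 * atan_act(x) + a3 * log_act(x)

x = np.linspace(-4, 4, 9)
print(np.round(fused_activation(x), 3))
```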
Motivated by Gage [On an area-preserving evolution equation for plane curves, in Nonlinear Problems in Geometry, ed. D. M. DeTurck, Contemporary Mathematics, Vol. 51 (American Mathematical Society, Providence, RI, 1986), pp. 51–62] and Ma–Cheng [A non-local area preserving curve flow, preprint (2009), arXiv:0907.1430v2 [math.DG]], in this paper an area-preserving flow for convex plane curves is presented. This flow decreases the perimeter of the evolving curve and makes the curve more and more circular during the evolution process. Finally, as t goes to infinity, the limiting curve is a finite circle in the C^∞ metric.
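For reference, Gage's area-preserving flow cited above can be written as follows, for a convex closed curve X(·, t) with inward unit normal N, curvature κ and length L(t); the flow studied in the paper is a variant of this type, so the display below is orientation rather than the paper's own evolution equation.

```latex
% Gage's area-preserving flow: area is preserved while length decreases
% (the last inequality follows from Cauchy-Schwarz).
\[
  \frac{\partial X}{\partial t} \;=\; \Bigl(\kappa - \frac{2\pi}{L(t)}\Bigr)\, N ,
  \qquad
  \frac{dA}{dt} = 0 ,
  \qquad
  \frac{dL}{dt} = -\int_{\gamma_t} \kappa^{2}\, ds + \frac{4\pi^{2}}{L(t)} \;\le\; 0 .
\]
```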
The aim of this paper is to present a convex curve evolution problem which is determined by both local (curvature κ) and global (area A) geometric quantities of the evolving curve. This flow decreases both the perimeter and the area of the evolving curve and makes the curve more and more circular during the evolution process. Finally, as t goes to infinity, the limiting curve is a finite circle in the C^∞ metric.
In this paper, we introduce two 1/κ^n-type (n ≥ 1) curvature flows for closed convex planar curves. Along the flows the length of the curve decreases while the enclosed area increases. Finally, the evolving curves converge smoothly to a finite circle, provided they do not develop a singularity during the evolution process.
The article presents a series of numerical simulations of exact solutions of the Einstein equations performed using the Cactus code, a complete three-dimensional machinery for numerical relativity. We describe an application ("thorn") for the Cactus code that can be used for evolving a variety of exact solutions, with and without matter, including solutions used in modern cosmology for modeling the early stages of the universe. Our main purpose has been to test the Cactus code on these well-known examples, focusing mainly on the stability and convergence of the code.
The aim of this work is to predict economic convergence among countries by using a generalization of the Ehrenfest urn model. In particular, this work shows that the Ehrenfest model captures the convergence among countries. An empirical analysis is presented for the European Union countries, the G7 countries and the emerging countries.
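For context, the classical Ehrenfest urn that the paper generalizes works as follows: N balls are distributed over two urns and, at each step, a uniformly chosen ball is moved to the other urn, so the occupancy of either urn drifts toward N/2. The simulation below illustrates this textbook version, not the paper's generalized model.

```python
# Classical Ehrenfest urn: the count in urn A drifts toward N/2, the
# "convergence" mechanism invoked above. Textbook model, not the paper's
# generalized version.
import numpy as np

rng = np.random.default_rng(3)
N, steps = 100, 20000
in_A = N                       # start with all balls in urn A
trajectory = np.empty(steps, dtype=int)

for t in range(steps):
    # A ball currently in urn A is picked with probability in_A / N.
    if rng.random() < in_A / N:
        in_A -= 1
    else:
        in_A += 1
    trajectory[t] = in_A

print("mean occupancy of urn A over the last half:",
      trajectory[steps // 2:].mean())   # close to N/2 = 50
```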
A system of simultaneously triggered clocks is designed to be stabilizing: if the clock values ever differ, the system is guaranteed to converge to a state where all clock values are identical and subsequently remain identical. For an N-clock system, the design uses N registers of 2 log N bits each and guarantees convergence to identical values within N^2 "triggers".
This study explores how income inequality and defence burden affect economic growth in different parts of the world. We follow an endogenous growth model that proposes a negative relationship of growth with income inequality and defence burden. The implications of the model are tested using panel data. The results suggest a negative relationship of growth with both income inequality and defence burden. A by-product of this analysis is a conclusion regarding convergence: our study finds no support for convergence across the world.
This paper examines the club convergence and conditional convergence of economic growth across the 15 major states in India over the period 1993–1994 to 2004–2005 using dynamic fixed-effect growth models. The results show that there is club convergence within the middle-income states. There is also evidence of convergence of per capita income among Indian states when conditioning on private investment and public investment along with other factors of economic growth. This paper is innovative in separating the significance of private investment from that of public investment in the long-run dynamics of income in Indian states. The paper suggests that regional disparity in income can be reduced by an equitable allocation of private investment and an equitable distribution of public investment.
This paper evaluates productivity through a metafrontier slacks-based measure and explores the temporal and regional heterogeneity of productivity from static and dynamic perspectives. We also conduct a convergence analysis of productivity. Our empirical results show that the east has the highest Technological Gap Ratio (TGR), which reveals that the east has the lowest production technology heterogeneity in China. Production heterogeneity has increased since 2004. FDI had negative effects on productivity, especially in the middle, west and northeast regions, which indicates that FDI had a “crowding out” effect in China. Productivity displayed a convergence trend from 2004 to 2012. It is important for China’s local firms to enhance indigenous innovation and increase knowledge spillovers in order to narrow their production technology gap.
We study properties of a modified memory gradient method, including its global convergence and rate of convergence. Numerical results show that modified memory gradient methods are effective in solving large-scale minimization problems.
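As background, a generic (unmodified) memory gradient iteration uses the search direction d_k = -g_k + beta_k d_{k-1}; the sketch below applies such an iteration with an illustrative choice of beta_k and a simple backtracking line search to a small quadratic problem, and is not the specific modification analyzed in the paper.

```python
# Generic memory gradient iteration d_k = -g_k + beta_k * d_{k-1} on a small
# quadratic test problem. The beta_k and step rule are standard illustrations,
# not the paper's modification.
import numpy as np

def memory_gradient(A, b, x0, iters=200, tol=1e-8):
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    x, d_prev = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Memory term: reuse a fraction of the previous search direction.
        beta = 0.3 * np.linalg.norm(g) / (np.linalg.norm(d_prev) + 1e-12)
        d = -g + beta * d_prev
        if d @ g >= 0:                 # safeguard: fall back to steepest descent
            d = -g
        step, fx = 1.0, f(x)
        while f(x + step * d) > fx + 1e-4 * step * (g @ d):   # Armijo backtracking
            step *= 0.5
        x, d_prev = x + step * d, d
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print("minimizer:", np.round(memory_gradient(A, b, np.zeros(2)), 4))  # approx [0.2, 0.4]
```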
A continuous approach using an NCP function for approximating the solution of the max-cut problem is proposed. The max-cut problem is relaxed into an equivalent nonlinearly constrained continuous optimization problem, and a feasible direction method without line searches is presented for generating an optimal solution of the relaxed continuous optimization problem. The convergence of the algorithm is proved. Numerical experiments and comparisons on some max-cut test problems show that satisfactory solutions of max-cut problems can be obtained with less computation time. Furthermore, this is the first time that a feasible direction method has been combined with an NCP function for solving the max-cut problem, and a similar idea can be generalized to other combinatorial optimization problems.
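The snippet below is not the authors' NCP-function feasible direction method; it only illustrates the underlying relaxation idea with a generic baseline: maximize the relaxed cut value f(x) = (1/4)(sum_{ij} w_ij - x^T W x) over the box [-1, 1]^n by projected gradient ascent and round by sign.

```python
# Generic illustration of relaxing max-cut to a continuous box-constrained
# problem and solving it with projected gradient ascent, then rounding.
# NOT the NCP-function feasible direction method of the paper.
import numpy as np

rng = np.random.default_rng(4)
n = 12
W = rng.integers(0, 2, (n, n)).astype(float)
W = np.triu(W, 1)
W = W + W.T                              # symmetric 0/1 weights, zero diagonal

x = rng.uniform(-0.1, 0.1, n)
lr = 0.05
for _ in range(500):
    grad = -0.5 * (W @ x)                # gradient of -(1/4) x^T W x
    x = np.clip(x + lr * grad, -1.0, 1.0)

cut_sign = np.sign(x)
cut_sign[cut_sign == 0] = 1.0
cut_value = 0.25 * (W.sum() - cut_sign @ W @ cut_sign)
print("rounded cut value:", cut_value)
```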