This paper describes an investigation of the bending strength and elastic wave signal characteristics of Si3N4 monolithic and Si3N4/SiC composite ceramics with crack-healing ability. The elastic wave signals generated during compressive loading of the brittle materials by a Vickers indenter were recorded in real time, and the AE signals were analyzed by a time-frequency method. Three-point bending tests were performed on Si3N4 monolithic and Si3N4/SiC composite specimens with and without crack healing. The bending strength of the specimens crack-healed at 1300°C was completely recovered to that of the smooth specimens, and the frequency characteristics of the crack-healed specimens tended toward the dominant frequency distribution of the smooth specimens. These results for the anisotropic ceramics suggest that such signal information offers a feasible technique for guaranteeing the structural integrity of a ceramic component.
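A minimal Python sketch of this kind of time-frequency analysis, assuming a digitized AE waveform; the synthetic burst, sampling rate, and STFT settings are illustrative stand-ins, not the paper's actual data or method:

```python
import numpy as np
from scipy import signal

# Synthetic acoustic-emission burst: a decaying 150 kHz tone in noise,
# a stand-in for a waveform recorded during Vickers indentation.
fs = 2_000_000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)
burst = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 150e3 * t)
x = burst + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Short-time Fourier transform: one standard time-frequency representation.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Dominant frequency per time slice; healed and cracked specimens would be
# compared through the distributions of such dominant frequencies.
dominant = f[np.argmax(Sxx, axis=0)]
print(f"median dominant frequency: {np.median(dominant[tt < 5e-4]) / 1e3:.0f} kHz")
```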
On the basis of the lattice Boltzmann method for the Navier–Stokes equation, we have performed a numerical experiment on forced turbulence in real space and time. Our new findings are summarized in two points. First, analyzing the mean-field behavior of the velocity field with exit-time statistics, we verified Kolmogorov's scaling and Taylor's hypothesis at the same time. Second, analyzing the intermittent velocity fluctuations with a non-equilibrium probability distribution function and wavelet denoising, we clarified that the coherent vortices sustain the power-law velocity correlation in the non-equilibrium state.
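The coherent/incoherent split via wavelet denoising can be sketched as follows; this is an illustrative hard-thresholding scheme in Python (the db4 wavelet, the universal threshold, and the toy signal are assumptions, not the authors' exact procedure):

```python
import numpy as np
import pywt

def coherent_part(v, wavelet="db4", level=6):
    """Split a 1-D velocity signal into coherent and incoherent parts by
    hard-thresholding wavelet coefficients (a stand-in for the denoising
    used to isolate coherent vortices)."""
    coeffs = pywt.wavedec(v, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
    thr = sigma * np.sqrt(2 * np.log(v.size))            # universal threshold
    kept = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet)[: v.size]

rng = np.random.default_rng(1)
v = np.cumsum(rng.standard_normal(4096))        # toy velocity record
vc = coherent_part(v)                           # coherent (vortex) component
vi = v - vc                                     # incoherent residue
print(f"energy fraction in coherent part: {np.sum(vc**2) / np.sum(v**2):.3f}")
```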
Non-invasive measurement of blood components based on near-infrared (NIR) spectroscopy has become a favorite topic in biomedicine. However, the various noises introduced by instrument measurement and the varying background arising from the absorption of components other than the target analyte are the main factors degrading the prediction accuracy of multivariate calibration. Since backgrounds and noises are typically concentrated in the high-scale approximation and low-scale detail coefficients, they can be identified by the wavelet transform (WT), which has a multi-resolution character and can decompose spectral signals into different frequency components while retaining the resolution of the original signal. Combined with the criterion of uninformative variable elimination (UVE), backgrounds and noises can be eliminated simultaneously and visually. The basic principle and application of this pretreatment method, the wavelet transform with the UVE criterion, are presented in this paper. Three experimental NIR spectral data sets containing glucose (the target analyte in this study) are used as examples: a four-component aqueous solution data set, a plasma data set, and an oral glucose tolerance test (OGTT) data set. The effect of the wavelength bands selected in the pretreatment process is discussed, and the adaptability of different pretreatment methods to the uncertain, complex NIR spectral models encountered in non-invasive blood component measurement is analyzed. This research indicates that wavelet transform pretreatment with the UVE criterion can eliminate varying backgrounds and noises from experimental NIR spectral data directly. In the spectral region of 1100 to 1700 nm, this pretreatment yields a simpler and more precise multivariate calibration for non-invasive blood glucose measurement. Furthermore, comparison with other pretreatment methods implies that the method applied in this study adapts better to complex NIR spectral models. This study thus offers another path toward improving NIR-based non-invasive measurement of blood components.
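The pretreatment pipeline can be sketched in Python as below; the wavelet choice, decomposition depth, and the simplified leave-one-out stability score are assumptions standing in for the paper's full WT-UVE procedure:

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def wt_detrend(spectra, wavelet="db4", level=5):
    """Suppress varying background (coarsest approximation) and noise
    (finest detail) in each NIR spectrum via the wavelet transform."""
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        c = pywt.wavedec(s, wavelet, level=level)
        c[0][:] = 0           # background lives in the approximation
        c[-1][:] = 0          # noise lives in the finest detail
        out[i] = pywt.waverec(c, wavelet)[: s.size]
    return out

def uve_mask(X, y, n_components=5):
    """Simplified UVE: keep wavelengths whose regression-coefficient
    stability exceeds the best stability of appended random variables."""
    rng = np.random.default_rng(0)
    n, p = X.shape
    Xa = np.hstack([X, 1e-10 * rng.standard_normal((n, p))])
    B = []
    for i in range(n):                   # leave-one-out coefficient estimates
        keep = np.arange(n) != i
        pls = PLSRegression(n_components=n_components).fit(Xa[keep], y[keep])
        B.append(pls.coef_.ravel())
    B = np.array(B)
    stability = np.abs(B.mean(axis=0)) / (B.std(axis=0) + 1e-12)
    return stability[:p] > stability[p:].max()   # boolean wavelength mask
```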
Based on the continuous wavelet transform modulus maxima (WTMM) method, a multifractal analysis is introduced to discriminate the irregular fracture signals of materials. The method provides an efficient numerical technique for statistically characterizing the local regularity of fractures.
The results obtained by this nonlinear analysis suggest that multifractal parameters such as the capacity dimension D0, the average singularity strength α0, the aperture of the left side (α0 − αmin) and the total width (αmax − αmin) of the D(α) spectra provide a better characterization of the different fracture stages.
Discriminating the three principal stages of fracture, namely crack initiation, crack propagation and final rupture, provides a powerful diagnostic tool for identifying the crack initiation site and thus delineating the causes of cracking in the material.
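A compact Python sketch of a WTMM-style estimate of the D(α) spectrum follows; the Mexican-hat wavelet, the scale range, and the partition-function/Legendre-transform shortcut are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np
import pywt
from scipy.signal import argrelmax

def wtmm_spectrum(x, scales=np.arange(2, 64), qs=np.linspace(-3, 3, 25)):
    """Estimate the multifractal D(alpha) spectrum of a 1-D signal with a
    WTMM-style partition function followed by a Legendre transform."""
    W, _ = pywt.cwt(x, scales, "mexh")
    Z = np.empty((qs.size, scales.size))
    for j in range(scales.size):
        w = np.abs(W[j])
        m = w[argrelmax(w)[0]]               # wavelet-transform modulus maxima
        m = m[m > 1e-12]
        Z[:, j] = [np.sum(m ** q) for q in qs]
    la = np.log(scales)
    tau = np.array([np.polyfit(la, np.log(Z[i]), 1)[0] for i in range(qs.size)])
    alpha = np.gradient(tau, qs)             # singularity strengths
    D = qs * alpha - tau                     # Legendre transform -> D(alpha)
    return alpha, D

# D0, alpha0 and the spectrum widths follow directly from (alpha, D):
x = np.cumsum(np.random.default_rng(2).standard_normal(4096))
alpha, D = wtmm_spectrum(x)
alpha0 = alpha[np.argmax(D)]                 # average singularity strength
print(f"alpha0 = {alpha0:.2f}, total width = {alpha.max() - alpha.min():.2f}, "
      f"D0 = {D.max():.2f}")
```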
Complex systems are interwoven collections of interacting entities that emerge and evolve through self-organization across a myriad of contexts; they exhibit subtleties on a global scale and steer the way toward understanding complexity, an understanding that has itself developed cumulatively, with order as its unifying framework. Because their components are not separable, a complex system cannot be understood in terms of the properties of its individual, isolated constituents; it must instead be approached at multiple levels, as a system whose emergent behavior and patterns transcend the characteristics of the units that compose it. This observation marks a change of scientific paradigm: a reductionist perspective does not by any means imply a constructionist one, and in that vein complex systems science, associated with multiscale problems, is regarded as the ascendancy of emergence over reductionism and of mechanistic insight maturing into complex-system insight. Evolvability ties species and humans to their ancestors' capacity to adapt, emerge and evolve; together with the relations among the complexity of models, designs, visualization and optimality, it demands of complexity a horizon that can account for the subtleties that make its own solution methods applicable. Such views lend importance to a future science of complexity, which may best be regarded as a minimal history congruent with observable variations, that is, the most parallelizable or symmetric process that turns random inputs into regular outputs. Interestingly, chaos and nonlinear systems enter this picture as cousins of complexity, whose innumerable components interact intensely with one another in a nonlinear fashion and with other related systems and fields. In mathematics, a relation is a way of connecting two or more things, be they numbers, sets or other mathematical objects, and it is relations, by describing how things are interrelated, that facilitate making sense of complex mathematical systems. Accordingly, mathematical modeling and scientific computing are proven principal tools for solving the problems that arise in exploring complex systems, with sound, stimulating and innovative contributions from data science, a discipline tailor-made for making sense of voluminous (big) data. Regarding the computation of the complexity of a mathematical model, analyses conducted over the run time depend on the kind of data chosen and on the methods employed. This makes it possible to examine the data used in a study, subject to the capacity of the computer at hand; differing computer capacities affect the results, so the step-by-step application of the method in code must also be taken into consideration. In this sense, a definition of complexity evaluated over different data gains a broader range of applicability, with more realism and convenience, because the process rests on concrete mathematical foundations. All of this indicates that methods need to be investigated on their mathematical foundations, so that the level of complexity that will emerge for any data one wishes to employ becomes foreseeable.
With regard to fractals, fractal theory and analysis aim to assess the fractal characteristics of data, and several methods exist for assigning fractal dimensions to datasets. From that perspective, fractal analysis expands our knowledge of the functions and structures of complex systems while serving as a potential means of evaluating novel areas of research and of capturing the roughness, nonlinearity and randomness of objects. The idea of fractional-order integration and differentiation, and the inverse relationship between them, lends fractional calculus applications in fields spanning science, medicine and engineering, among others. Within mathematics-informed frameworks built to yield reliable comprehension of complex processes that span many temporal and spatial scales, the fractional-calculus approach notably provides novel, applicable models, from fractional-order calculus through to optimization methods. Computational science and modeling, for their part, are oriented toward simulating and investigating complex systems by computer, drawing on domains ranging from mathematics and physics to computer science. A computational model comprising the numerous variables that characterize the system under consideration allows many simulated experiments to be performed. Furthermore, Artificial Intelligence (AI) techniques, whether or not combined with fractal and fractional analysis or with mathematical models, have enabled applications ranging from predicting the mechanisms of living organisms to modeling other interactions across remarkable spectra, besides providing solutions to real-world complex problems at both local and global scales. While maximizing model accuracy, AI can also minimize quantities such as computational burden. Relatedly, the notion of level of complexity, often employed in computer science for decision-making and problem-solving, evaluates the difficulty of algorithms and thereby helps determine the resources and time required to complete a task. Computational (algorithmic) complexity, the measure of the computing resources (memory and storage) that a specific algorithm consumes when run, signifies the complexity of an algorithm: it yields an approximate sense of the volume of computing resources required and is probed by running the algorithm on input data of different values and sizes. With search algorithms and solution landscapes, computational complexity ultimately points toward reductions and universality for exploring problems of varying degrees and ranges of predictability. Taken together, this line of sophisticated, computer-assisted proof can meet the requirements of accuracy, interpretability, predictability and grounding in the mathematical sciences, with AI and machine learning at the plinth of, and at the intersection with, many related domains, in line with concurrent technical analyses, computing processes, computational foundations and mathematical modeling.
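As a concrete illustration of assigning a fractal dimension to a dataset, here is a minimal box-counting estimate in Python; the scales, the chaos-game sample and the helper names are illustrative choices, not a prescribed procedure:

```python
import numpy as np

def box_counting_dimension(points, eps_list=(1/4, 1/8, 1/16, 1/32, 1/64)):
    """Estimate the fractal (box-counting) dimension of a 2-D point set:
    count occupied boxes N(eps) at several scales and fit the slope of
    log N(eps) against log(1/eps)."""
    points = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    counts = []
    for eps in eps_list:
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

# Sierpinski-triangle sample via the chaos game; expected dimension ~1.585.
rng = np.random.default_rng(0)
v = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2]])
p, pts = np.zeros(2), []
for _ in range(20000):
    p = (p + v[rng.integers(3)]) / 2
    pts.append(p.copy())
print(f"estimated dimension: {box_counting_dimension(np.array(pts)):.3f}")
```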
Consequently, and distinctively from earlier ones, our special issue series provides a novel direction for stimulating, refreshing and innovative interdisciplinary, multidisciplinary and transdisciplinary understanding and research in model-based and data-driven modes, so as to obtain feasible and accurate solutions, designed simulations and optimization processes, among much else. We therefore address theoretical reflections on how all these processes are modeled, merging advanced methods, mathematical analyses, computational technologies and quantum means while elaborating and exhibiting the implications of applicable approaches for real-world systems and other related domains.
In this paper, the Hermite wavelet method (HWM) is considered for the numerical solution of 12th- and 13th-order boundary value problems (BVPs) of ordinary differential equations (ODEs). The proposed HWM algorithm, developed in Maple, converts the ODEs into a system of algebraic equations. These equations are then solved for the unknown constants, yielding the approximate solution of the problem. Test problems are considered and solved with the HWM-based algorithm. The results are compared with the exact solutions and with solutions of other numerical methods in the existing literature. The comparisons, presented both graphically and in tabular form, show close agreement with the exact solutions and greater accuracy than the homotopy perturbation method (HPM) and the differential transform method (DTM).
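The paper's algorithm is developed in Maple for 12th- and 13th-order problems; the following Python sketch shows only the underlying mechanism (expand the unknown in a basis, collocate the ODE, solve the resulting algebraic system) on a second-order test BVP, and uses a plain Hermite polynomial basis as a simplification of the Hermite wavelet basis:

```python
import numpy as np
from numpy.polynomial import hermite as H

# Test problem: u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, exact u = sin(pi x).
N = 12                                    # number of basis functions (assumed)
xs = np.linspace(0, 1, N)                 # collocation nodes incl. boundaries
A = np.zeros((N, N))
b = np.zeros(N)
for k in range(N):
    e = np.zeros(N); e[k] = 1.0           # k-th Hermite basis function H_k
    A[0, k] = H.hermval(0.0, e)           # boundary condition u(0) = 0
    A[-1, k] = H.hermval(1.0, e)          # boundary condition u(1) = 0
    A[1:-1, k] = H.hermval(xs[1:-1], H.hermder(e, 2))   # u'' at interior nodes
b[1:-1] = -np.pi**2 * np.sin(np.pi * xs[1:-1])
c = np.linalg.solve(A, b)                 # the "algebraic system" step
xf = np.linspace(0, 1, 201)
err = np.max(np.abs(H.hermval(xf, c) - np.sin(np.pi * xf)))
print(f"max error: {err:.2e}")
```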
Studies reveal that the most prominent cause of bearing failure is a crack on any of its mating surfaces. Once a crack has initiated, the bearing can still be used for some time, but this duration depends largely on the loading conditions. This work focuses on the effect of different levels of static loading on crack propagation after initiation. To analyze this effect, an axial groove defect was seeded at a random location on the outer race of a taper roller bearing, and the bearing was run continuously under five different static loading conditions. Initially the bearing was run under load to initiate a crack naturally, but no crack had initiated even after 800 h of running; the crack was therefore initiated artificially for the study of its propagation. Experimentation showed that under the maximum static load of 20 kg the crack area began to propagate rapidly after 109 h of continuous running, whereas under no load it began propagating quickly only after 267.5 h. Statistical analysis of the signals recorded at different time intervals showed that the Shannon entropy rose suddenly at edge breakage (visually verified) during crack propagation; however, none of the statistical parameters correlated with crack propagation itself. To develop such a correlation, the Shannon entropy of the high-, medium- and low-frequency bands of the continuous wavelet transform (CWT) was computed using different wavelets. The Shannon entropy of the high-frequency CWT band with Daubechies 10 as the mother wavelet responded well to crack propagation, showing a sudden rise at edge breakage and an overall increase with crack growth. The high-frequency CWT band with Daubechies 10 was found suitable for detecting edge breakage and crack growth simultaneously because of its ability to capture transient characteristics over long durations.
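A minimal Python sketch of band-wise Shannon entropy follows; since common CWT implementations do not provide Daubechies wavelets, a discrete db10 decomposition is used here as a stand-in for the paper's CWT bands, and the toy vibration records are illustrative:

```python
import numpy as np
import pywt

def band_entropies(x, wavelet="db10", level=3):
    """Shannon entropy of low/medium/high frequency bands from a wavelet
    decomposition of a vibration record."""
    cA3, cD3, cD2, cD1 = pywt.wavedec(x, wavelet, level=level)
    ent = {}
    for name, c in {"low": cA3, "medium": cD2, "high": cD1}.items():
        p = c**2 / np.sum(c**2)               # normalized energy distribution
        p = p[p > 0]
        ent[name] = -np.sum(p * np.log2(p))   # Shannon entropy in bits
    return ent

# Toy records: a smooth signal and one with an impulsive burst standing in
# for edge breakage; in practice the band entropies are tracked over run time.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 8192)
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy.copy()
faulty[4000:4040] += 2.0 * rng.standard_normal(40)
print(band_entropies(healthy)["high"], band_entropies(faulty)["high"])
```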