This study focuses on the application of artificial intelligence behavior constraints to the analysis of mechanical injuries in dance training, aiming to accurately analyze and effectively prevent such injuries through the introduction of artificial intelligence technology. Dance, an art form highly dependent on physical skill, carries a certain risk of mechanical injury during training. In this study, we first reviewed the relevant theories of mechanical injury in dance training and analyzed the inherent relationship between dance movements and mechanical injuries. We then used artificial intelligence technology to perform behavior-constraint analysis of the dance training process: by constructing a dance action recognition model, we achieved real-time monitoring and evaluation of training movements. On this basis, we further applied the principles of mechanics to quantitatively analyze dance movements and extract the key factors that contribute to mechanical injury. Through in-depth analysis and comparison, this study found that mechanical injuries in dance training are mainly influenced by factors such as movement standardization, training intensity, and individual differences. We applied the theory of sports biomechanics to study sports dance injuries and analyzed the causes of athlete injuries. To explore more scientific training methods and means, the correlation coefficients between main joint muscle strength, proprioception, tibialis anterior imbalance response time, and rotational stability were measured in order to design proprioception training suitable for sports dance, adhering to the principle of gradual progression. In future sports dance rotation training, training should be tailored to the characteristics of different rotation steps, providing a reference for strengthening leg and ankle training.
During PIXE analysis of biomedical samples, severe damage to the sample often occurs. This damage is always accompanied by mass loss and strong alteration of the initial structure. The mass loss can be explained by the loss of volatile elements, mainly the major elements hydrogen, carbon, nitrogen, and oxygen. This results in a charge dependency of the element contents, which has to be corrected during data reduction. Here we describe measurements on different samples that monitor the mass loss and the charge dependency of the trace element concentrations. A model was developed to give the correct concentrations after PIXE analysis. Additionally, first results of PIXE analyses of gum tissue are reported; they give direct hints of amalgam residues inside the tissue in pathologically altered areas.
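The correction model itself is not given in the abstract. As a minimal illustration of the underlying idea, the apparent trace-element concentration measured at increasing accumulated beam charge can be extrapolated back to zero charge; the linear form and all names below are assumptions for illustration, not the authors' actual model:

```python
import numpy as np

def extrapolate_to_zero_charge(charge, conc):
    """Correct an apparent trace-element concentration for beam-induced
    matrix mass loss by linearly extrapolating the measured concentration
    C(Q) back to zero accumulated charge Q (a simple stand-in for a
    full charge-dependency correction model)."""
    slope, intercept = np.polyfit(charge, conc, 1)
    return intercept  # estimated concentration before irradiation damage
```

The same extrapolation idea carries over to nonlinear fits if the mass loss saturates with charge.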
Elemental images of rice leaves with lesions were taken with an in-air submilli-PIXE camera at Tohoku University. The rice leaves were analyzed in vivo, and the results showed that Ca and Mn accumulate in the lesion areas. By checking the intactness of the cells, it was confirmed that the living leaves were not seriously damaged. The surfaces of japan bowls (Edo period), Japanese vessels (Shigaraki ware, Edo period) and wooden tablets (Meiji period) were directly surveyed with submilli-beams of 3 MeV protons. The Japanese vessels and wooden tablets were not discolored by the beam irradiation. The Shigaraki vessels were discolored at first, but the discoloration disappeared within 10 days of irradiation. It is concluded that such samples are not seriously deteriorated by in-air PIXE analysis.
This review concerns radiation effects in silicon Charge-Coupled Devices (CCDs) and CMOS active pixel sensors (APSs), both of which are used as imagers in the visible region. Permanent effects, due to total ionizing dose and displacement damage, are discussed in detail, with a particular emphasis on the space environment. In addition, transient effects are briefly summarized. Implications for ground testing, effects mitigation and device hardening are also considered. The review is illustrated with results from recent ground testing.
Previously, the generalized luminosity ℒ was defined and calculated for all incident channels based on an NLC e+e- design. Alternatives were then considered to improve the differing beam-beam effects in the e-e-, eγ and γγ channels. One example was tensor beams composed of bunchlets n_{ijk} implemented with a laser-driven, silicon accelerator based on micromachining techniques. Problems were considered and expressions given for radiative broadening due to bunchlet manipulation near the final focus to optimize luminosity via charge enhancement, neutralization or bunch shaping. Because the results were promising, we explore fully integrated structures that include sources, optics (for both light and particles) and acceleration in a common format - an accelerator-on-chip. Acceptable materials (and wavelengths) must allow velocity synchronism between many laser and electron pulses with optimal efficiency in high-radiation environments. There are obvious control and cost advantages that accrue from using silicon structures if radiation effects can be made acceptable and the structures fabricated. Tests related to deep etching, fabrication and radiation effects on candidate amorphous and crystalline materials show Si (λ_L > 1.2 μm) and fused SiO2 (λ_L > 0.3 μm) to be ideal materials.
Previously, the generalized luminosity ℒ was defined and calculated for all incident channels based on an NLC e+e- design. Alternatives were then considered to improve the differing beam-beam effects in the e-e-, eγ and γγ channels. Regardless of the channel, there was a large flux of outgoing, high-energy photons produced by the beam-beam interaction (e.g. beamstrahlung) that needs to be disposed of and whose flux depended on ℒ. One approach to this problem is to consider it a resource and attempt to take advantage of it by disposing of these straight-ahead photons in more useful ways than simply dumping them. While there are many options for monitoring the luminosity, any method that allows feedback and optimization in real time, in a non-intercepting and non-interfering way during normal data taking, is extremely important – especially if it provides other capabilities, such as high-resolution tuning of spot sizes, and can be used for all incident channels without essential modifications to their setup. Our "pin-hole" camera appears to be such a device if it can be made to work with high-energy photons in ways that are compatible with the many other constraints and demands on space around the interaction region. The basis for using this method is that it has, in principle, the inherent resolution and bandwidth to monitor the very small spot sizes, and their stabilities, that are required for very high integrated luminosity. While there are many possible simultaneous uses of these outgoing photon beams, we limit our discussion to a single, blind, proof-of-principle experiment that was done on the FFTB line at SLAC to certify the concept of a camera obscura for high-energy photons.
The classical plate theory (CPT) has been used as a theoretical solution for impact loads on thin plates. However, the CPT is no longer a useful solution for impact loads in industrial power plants, which are generally constructed from thick plates. In this paper a novel and effective approach is developed to determine the time history of an impact load on a thick aluminum plate, based on the analysis of acoustic waveforms measured by a sensor array located on the plate surface, in combination with the theoretical Green's function for the plate. The Green's functions are derived from either the exact elastodynamic theory or the approximate shear deformation plate theory (SDPT). If the displacement is measured on the plate, the time history of the impact load can be calculated by deconvolving the measured displacement with the theoretical Green's function. The reconstructed time history is compared with the impact-load time history measured by a force transducer, and good agreement is found. This technique presents a valuable method for source identification and may be applied to signals recorded from the acoustic emission of propagating cracks in in-service structures under impact.
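The deconvolution step can be sketched in a few lines. The following is a generic frequency-domain (Wiener-regularised) deconvolution, assuming a known discretised Green's function; it illustrates the operation, not the paper's specific implementation, and all names and signal parameters are our own:

```python
import numpy as np

def reconstruct_force(u, g, dt, reg=1e-6):
    """Recover an impact-force history from a measured displacement u(t),
    given the plate Green's function g(t), via regularised frequency-domain
    deconvolution: u = (g * f) dt  =>  F ~ U / G, with Wiener damping to
    suppress frequencies where G is small."""
    n = len(u) + len(g) - 1                  # zero-padding avoids wrap-around
    U = np.fft.rfft(u, n)
    G = np.fft.rfft(g, n) * dt               # discrete form of the convolution integral
    H = np.conj(G) / (np.abs(G) ** 2 + reg * np.max(np.abs(G)) ** 2)
    f = np.fft.irfft(U * H, n)
    return f[: len(u)]
```

The regularisation constant trades noise amplification against resolution; in practice it would be tuned to the sensor noise floor.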
Based on existing experimental results for 304 stainless steel, the evolution of fatigue damage during stress-controlled cyclic loading is first discussed. Then, a damage-coupled visco-plastic cyclic constitutive model is proposed in the framework of unified visco-plasticity and continuum damage mechanics to simulate the whole-life ratcheting and predict the fatigue failure life of the material during uniaxial stress-controlled cyclic loading with non-zero mean stress. In the proposed model, the whole-life ratcheting is described by a non-linear kinematic hardening rule, i.e., the Armstrong-Frederick model combined with the Ohno-Wang model I, together with the effect of fatigue damage. A damage threshold is employed to determine the failure life of the material. The simulated whole-life ratcheting and predicted failure lives are in fairly good agreement with the experimental results for 304 stainless steel.
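The ratcheting mechanism captured by the Armstrong-Frederick rule can be illustrated with a bare one-dimensional stress-controlled simulation. The sketch below deliberately omits the paper's visco-plasticity, the Ohno-Wang combination and the damage coupling, and its material constants are illustrative, not the fitted 304 values:

```python
import numpy as np

def ratchet_strain(smax, smin, cycles, E=200e3, sy=150.0, C=25e3, gam=100.0, nsub=400):
    """Explicit 1D stress-controlled cycling with Armstrong-Frederick
    kinematic hardening (backstress rate: dα = C dεp - γ α |dεp|).
    Returns the peak total strain of each cycle; under non-zero mean
    stress the peaks drift upward cycle by cycle (ratcheting)."""
    sig, alpha, epsp = 0.0, 0.0, 0.0
    peaks = []
    for tgt in [smax, smin] * cycles:
        dsig = (tgt - sig) / nsub
        for _ in range(nsub):
            sig += dsig
            xi = sig - alpha
            if abs(xi) > sy:                         # plastic substep
                s = np.sign(xi)
                dlam = max(s * dsig / (C - gam * alpha * s), 0.0)
                epsp += s * dlam
                alpha += C * s * dlam - gam * alpha * dlam
        if tgt == smax:
            peaks.append(sig / E + epsp)             # peak strain this cycle
    return np.array(peaks)
```

The dynamic-recovery term `gam * alpha` is what opens the hysteresis loop; with `gam = 0` (linear hardening) the loop closes and no ratcheting occurs, which is why the paper combines this rule with the Ohno-Wang model and damage to control the ratcheting rate.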
Damage due to creep and plastic flow is assessed using destructive and non-destructive methods in two steels (40HNMA and P91). In the destructive methods, standard tension tests were carried out after prestraining, and variations of selected tension parameters were used for damage identification. In order to assess damage development during creep and plastic deformation, the tests on both steels were interrupted at a range of selected strain magnitudes. Ultrasonic and magnetic techniques were used as the non-destructive methods for damage evaluation. The last step of the experimental programme comprised microscopic observations. A very promising correlation between the parameters of the different damage evaluation methods was achieved; it is particularly well proved for the ultimate tensile stress and the birefringence coefficient.
The second Christchurch earthquake on February 22, 2011, magnitude 6.35, generated more intense shaking in the Central Business District (CBD) than the September 4, 2010 Darfield earthquake, magnitude 7.1. The second earthquake was closer to the CBD and at shallower depth, resulting in peak ground accelerations three times higher. There were significant failures of unreinforced masonry buildings and the collapse of a few reinforced concrete buildings, leading to loss of life. Steel structures on the whole performed well during the earthquake, and the inelastic, plastic deformation was less than expected given the strength of the recorded ground accelerations. For steel buildings designed to withstand earthquake loading, a common design philosophy is to have some structural elements deform plastically, absorbing energy in the process. Typically, elements of beams are designed to deform plastically while the columns remain elastic. In the earthquake some of these elements deformed plastically and the buildings were otherwise structurally undamaged. The question which then arises is: the building may be safe, but will it withstand a further severe earthquake? In other words, how much further plastic work damage can be absorbed without failure of the structural element? In previous research at Auckland on modern structural steel, the steel was prestrained to various levels to represent earthquake loading, and the toughness was determined as a function of prestrain for the naturally strain-aged steel. Further research on the same steel investigated life to failure under cyclic plastic straining in tension and compression at various plastic strain amplitudes. This work has shown that, provided the plastic strain in the structural element is in the range 2–5%, the steel will still meet the relevant NZ Standards. To determine the remaining life, the plastic strain must be determined; a decision can then be made to use the building as is, replace the structural element, or demolish.
The paper completes a cycle of research devoted to the development of experimental bifurcation analysis (not computer simulation) in order to answer the following questions: do qualitative changes occur in the dynamics of local climate systems on a centennial timescale? how can such qualitative changes be analyzed with daily resolution at local and regional space-scales? and how can a one-to-one daily correspondence be established between the evolution of the dynamics and the economic consequences for production? To answer these questions, an unconventional conceptual model describing local climate dynamics was proposed and verified in the previous parts. That model (the HDS-model) originates from the hysteresis regulator with double synchronization and has a variable structure due to competition between amplitude quantization and time quantization. The main advantage of the HDS-model is the possibility of describing "internally" (on the basis of self-regulation) the specific causal effects observed in the dynamics of local climate systems, instead of an "external" description of the three states of hysteresis behavior (upper, lower and transient). As a result, the evolution of local climate dynamics is described by bifurcation diagrams built by processing meteorological observations, in which the strange effects of the essential interannual daily variability of the annual temperature variation are taken into account and explained. This opens novel possibilities for analyzing local climate dynamics in terms of the observed resultant of all internal and external influences on each local climate system. In particular, the paper presents a viewpoint on how to estimate the economic damage caused by climate-related hazards through bifurcation analysis.
That viewpoint includes the following ideas: practically every local climate system is characterized by its own time pattern of natural qualitative changes in temperature dynamics over a century, so any unified time window for determining local climatic norms seems questionable; the temperature limits set for climate-related technological hazards should be justified by the conditions of human activity, not by the climatic norms; and the damage caused by such hazards can be roughly estimated in relation to the average annual profit of each production. It then becomes possible to estimate the minimal and maximal numbers of the specified hazards per year in order, first of all, to avoid unforeseen latent damage, and to make useful relative estimates of damage and profit. We believe that the results presented in this cycle illustrate the considerable practical power of current advances in experimental bifurcation analysis. In particular, the developed QHS-analysis provides novel prospects both for adapting production to climatic change and for compensating negative technological impacts on the environment.
The paper continues the application of bifurcation analysis to research on local climate dynamics, based on processing historically observed data on the daily average land surface air temperature. Since the analyzed data come from instrumental measurements, this is experimental bifurcation analysis. In particular, we focus on the question: where is the boundary between the normal dynamics of local climate systems (norms) and situations with the potential to create damage (hazards)? We illustrate that the criteria for hazards (violent and unfavorable weather factors) perhaps relate mainly to empirical considerations of human opinion, rather than to natural qualitative changes of climate dynamics. To build the bifurcation diagrams, we rely on the unconventional conceptual model (the HDS-model), which originates from the hysteresis regulator with double synchronization. The HDS-model is characterized by a variable structure with competition between amplitude quantization and time quantization. Intermittency between three periodic processes is then considered the typical behavior of local climate systems, instead of chaos or quasi-periodicity, to account for the variety of local climate dynamics. From the known regularities of the HDS-model dynamics, we try to find a way to decompose local behaviors into homogeneous units within time sections of homogeneous dynamics. Here, we present the first results of such a decomposition, where quasi-homogeneous sections (QHS) are determined on the basis of modified bifurcation diagrams, and the units are reconstructed within limits connected with the problem of shape defects.
Nevertheless, the proposed analysis of local climate dynamics (QHS-analysis) makes it possible to show how comparatively modest temperature differences between the mentioned units on an annual scale can, step by step, expand into the great temperature differences of daily variability on a centennial scale. The norms and the hazards then correspond to fundamentally different viewpoints, where time sections of months and, especially, seasons distort the causal effects of natural dynamical processes. The specific circumstances under which the qualitative changes of local climate dynamics are realized are summarized by the notion of a likely periodicity. This, in particular, allows us to explain why 30-year averaging remains the most common rule so far, while decadal averaging is beginning to replace it. We believe that QHS-analysis can be considered the joint between the norms and the hazards from a bifurcation-analysis viewpoint, where the causal effects of local climate dynamics are projected into the customary timescale only at the last step. The results could be of interest for developing fields connected with climate change and risk assessment.
Slender beams with small cracks described by Γ-limits: a description of an elastic-perfectly plastic beam or rod is obtained as a variational limit of 2D or 3D bodies with damage at a small scale, satisfying the Kirchhoff kinematic restriction on the deformations.
A homogenization result is given for a material having brittle inclusions arranged in a periodic structure. According to the relation between the softness parameter and the size of the microstructure, three different limit models are deduced via Γ-convergence. In particular, damage is obtained as limit of periodically distributed microfractures.
The aim of this paper is to study the damage evolution in an elasto-piezoelectric body. The effect of the damage, due to internal tension or compression and caused by the opening and growth of micro-cracks and micro-cavities, and the piezoelectric effects are included into the model. The variational formulation leads to a coupled system of evolutionary equations. An existence and uniqueness result is then proved by using the theory of maximal monotone operators, the Schauder fixed-point theorem, and a comparison result.
This paper revolves around a newly introduced weak solvability concept for rate-independent systems, alternative to the notions of Energetic (E) and Balanced Viscosity (BV) solutions. Visco-Energetic (VE) solutions have been recently obtained by passing to the time-continuous limit in a time-incremental scheme, akin to that for E solutions, but perturbed by a “viscous” correction term, as in the case of BV solutions. However, for VE solutions this viscous correction is tuned by a fixed parameter. The resulting solution notion turns out to describe a kind of evolution in between Energetic and BV evolution. In this paper we aim to investigate the application of VE solutions to nonsmooth rate-independent processes in solid mechanics such as damage and plasticity at finite strains. We also address the limit passage, in the VE formulation, from an adhesive contact to a brittle delamination system. The analysis of these applications reveals the wide applicability of this solution concept, in particular to processes for which BV solutions are not available, and confirms its intermediate character between the E and BV notions.
The key problem in creating an autonomous system is how the brain chooses its reactions, and how motivation is determined by ongoing signals, memory and heredity. In attempts to design a robot brain, several efforts have been made to create a self-contained control system that mimics biological motivation. However, it is impossible to develop an artificial brain using conventional computer algorithms, since a conventional program cannot predict all of the possible perturbations and disturbances in the environment, and hence cannot plan strategies that allow the system to overcome these perturbations and return to an optimal state. An external programmer must constantly update the system with the proper strategies needed to overcome newly encountered perturbations. In contradistinction, living systems demonstrate excellent goal-directed behavior without the participation of an external programmer, and without full knowledge of the external environment. Biological motivation refers to actions on the part of an organism that lead to the attainment of a specific goal; when the organism attains the goal it is in an optimal state, and no further actions are generated. A deviation from the optimum results in a change in activity that leads to a return to the optimum. Biological motivations arise as the result of metabolic disturbances and are related to transient injury of specific neurons. Treatments which protect neurons satisfy motivations and exert a psychotropic action related to relief. I have developed a novel hypothesis of how living systems achieve a goal, based on data gathered on the effects of motivation on individual neurons. I claim that by affecting the non-stability of its postsynaptic targets (probably by means of motivationally relevant substances), the neuron in the end chooses its reaction, although at each instant it acts by chance.
Coal–rock dynamic disasters seriously threaten safe production in coal mines, and effective early warning is especially important to reduce the losses caused by these disasters. The occurrence of coal–rock dynamic disasters is governed by mining-induced stress loading and unloading. Therefore, it is of great significance to analyze the precursory information of coal deformation and failure during true triaxial stress loading and unloading. In this study, the deformation and failure of coal samples subjected to true triaxial loading and unloading, including fixed axial stress and unloading confining stress (FASUCS), are experimentally investigated. Acoustic emission (AE) during the deformation of the coal samples is monitored, and the multi-fractal characteristics of the AE are analyzed. Combined with the deformation and failure of the coal samples, the precursory information of coal deformation and rupture during true triaxial stress loading and unloading is obtained. Finally, the relationship between the multi-fractal characteristics and the damage evolution of coal samples under FASUCS is discussed. The results show that the multi-fractal spectral widths of the AE time series under FASUCS with different initial confining stresses or unloading rates are quite different, but the dynamic changes of the multi-fractal parameters Δα and Δf are similar. This indicates that the microscopic complexity of the AE events differs between FASUCS conditions, but the macroscopic generation mechanism of the AE events has an inherent uniformity. The dynamic changes of Δα and Δf can reflect the stress and damage state of the coal samples. The dynamic change of Δα accords well with the damage evolution of the coal samples: a gradual decrease of Δα corresponds to a slow increase of damage, while a sharp increase of Δα corresponds to rapid damage growth.
At the same time, the abrupt-change point of the damage curve at distinct stress-difference levels shares the same variation trend as the abrupt-change point of Δα. The change of Δα can therefore reflect the damage process of the coal samples and can be used as precursory information for predicting coal–rock rupture. This finding is of great significance for the early warning of coal–rock dynamic disasters.
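The multi-fractal spectral width Δα of a time series such as an AE energy sequence can be estimated with the standard method of moments (box measures plus a Legendre transform). The sketch below is a minimal generic implementation; the box sizes, q-range and function names are our own choices, not the authors':

```python
import numpy as np

def multifractal_width(signal, q_values=np.arange(-5, 5.5, 0.5),
                       scales=(4, 8, 16, 32, 64)):
    """Estimate the multifractal spectrum width Δα of a non-negative series
    via the method of moments: box measures μ_i(ε), partition function
    χ_q(ε) = Σ μ_i^q ~ ε^τ(q), then α(q) = dτ/dq and Δα = α_max - α_min."""
    s = np.asarray(signal, dtype=float)
    s = s / s.sum()                                   # normalise to a measure
    taus = []
    for q in q_values:
        log_chi, log_eps = [], []
        for n in scales:                              # n = number of boxes
            idx = np.arange(0, len(s), len(s) // n)
            boxes = np.add.reduceat(s, idx)[:n]       # box measures μ_i
            boxes = boxes[boxes > 0]
            log_chi.append(np.log(np.sum(boxes ** q)))
            log_eps.append(np.log(1.0 / n))
        taus.append(np.polyfit(log_eps, log_chi, 1)[0])  # τ(q) = slope
    alpha = np.gradient(np.array(taus), q_values)     # α(q) = dτ/dq
    return alpha.max() - alpha.min()                  # spectrum width Δα
```

A heterogeneous (multiplicative-cascade-like) AE sequence yields a broad spectrum, whereas a homogeneous one yields Δα near zero, which is the contrast the abstract exploits as a damage indicator.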
A novel gas diffusivity model for dry porous media with a damaged tree-like branching network is proposed using fractal theory. We systematically investigated the effects of the number of damaged channels and of the other structural parameters on the dimensionless gas diffusivity (DGD) and the concentration drop. As the number of damaged channels increases, the DGD decreases, while the ratio of the concentration drop rises. The DGD is negatively correlated with the length exponent, the total number of branching levels, and the branching angle, and positively correlated with the diameter exponent. The ratio of the concentration drop is negatively correlated with the length exponent and the total number of branching levels, but positively associated with the diameter exponent and the branching level. In addition, in the calculation of the concentration drop, the total concentration drop can be decomposed into two geometric sequences whose scale factors are constants independent of the number of damaged channels. The reliability of the model predictions was verified by comparison with experimental data available in the literature. The proposed model may well explain the physical mechanism of gas diffusion in the damaged network.
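The qualitative trend that the DGD falls as channels are damaged can be reproduced with a toy series-parallel conductance model of a branching network. This is an illustration only, not the paper's fractal derivation, and all parameter names below are assumptions:

```python
def dimensionless_gas_diffusivity(levels=4, N=2, beta=0.8, gamma=0.7, damaged=0):
    """Toy estimate of the DGD of a tree-like branching network: level k has
    N**k identical channels of diameter d0*beta**k and length l0*gamma**k.
    Each channel acts as a diffusive conductance ~ d^2/l; channels within a
    level are in parallel, levels are in series. 'damaged' channels are
    removed from the deepest level; the result is normalised so that an
    undamaged network gives DGD = 1."""
    def total_resistance(n_damaged):
        R = 0.0
        for k in range(levels + 1):
            n_k = N ** k - (n_damaged if k == levels else 0)
            g_k = beta ** (2 * k) / gamma ** k   # conductance of one level-k channel
            R += 1.0 / (n_k * g_k)               # parallel channels, levels in series
        return R
    return total_resistance(0) / total_resistance(damaged)
```

Removing channels raises the deepest level's resistance, so the ratio drops below one, consistent with the decreasing trend reported for the DGD.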
Spontaneous imbibition has attracted considerable attention due to its widespread occurrence in nature. In this study, we theoretically explored the spontaneous imbibition dynamics in a damaged V-shaped tree-like branching network by comparison with a parallel net under fixed constraints. The imbibition capacity is characterized by two dimensionless quantities: the imbibition potential and the dimensionless imbibition time. Fractal theory is then used to derive analytical expressions for these two quantities, after which the influence of the structural parameters on the imbibition process is systematically investigated. It is found that a larger number of damaged channels corresponds to a lower imbibition potential and a lower dimensionless imbibition time. Notably, the branching number N has an evident enhancement effect on the imbibition potential. A parameter plane is introduced to visualize parameter combinations, enabling direct evaluation of the imbibition process in a specific network system. The physical mechanisms revealed by the proposed model provide effective guidance for the analysis of imbibition in damaged tree-like networks.