This paper concerns the modeling of smart health sensors for data monitoring, where the data may be noisy, with correlated noise. We apply Clifford-based wavelets/multiwavelets to correlated noise in multi-sensor health data monitoring by estimating the set of sensor nodes that minimizes the error, computed from the signal values at the selected nodes and caused mostly by the correlated noise. Instead of minimizing the estimation error directly, we evaluate a multi-level multiwavelet scheme for estimating the error between the parameter vector and its sub-vector restricted to those nodes. Numerical simulations are provided, with a comparison to some recent existing works. The proposed model shows good performance and fast execution times relative to those works, and it surpasses them by not requiring any a priori structure to be assumed for the data. Wavelets are capable of detecting, localizing, and eliminating noise, even correlated noise, efficiently via the independent, uncorrelated multiwavelet components.
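As a rough illustration of the multi-level wavelet-thresholding idea behind such denoising, the sketch below uses standard real-valued wavelets from PyWavelets rather than the Clifford multiwavelets of the paper; the test signal, the AR(1) noise model, and the threshold choice are illustrative assumptions.

```python
# Sketch of multi-level wavelet denoising of a sensor trace with correlated noise.
# Standard db4 wavelets stand in for the paper's Clifford multiwavelets (assumption).
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise-scale estimate (MAD) from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)

# Example: a slow sensor signal corrupted by AR(1)-correlated noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
noise = np.zeros_like(t)
for i in range(1, len(t)):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.1)
recovered = wavelet_denoise(np.sin(2 * np.pi * 5 * t) + noise)
```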
In recent years, unconventional reservoirs have drawn tremendous attention worldwide. This special issue collects a series of recent works on various fractal-based approaches in unconventional reservoirs. The topics covered in this introduction include fractal characterization of pore (throat) structure and its influence on the physical properties of unconventional rocks, fractal characteristics of crack propagation in coal and of fluid flow in rock fracture networks under shearing, porous flow phenomena and gas adsorption mechanisms, and fractal geophysical methods in reservoirs.
This study aims to develop effective predictive models to assess knee replacement (KR) risk in knee osteoarthritis (KOA) patients, which is important for the personalized diagnosis, assessment, and treatment of KOA. A total of 269 KOA patients were selected from the Osteoarthritis Initiative (OAI) public database, and their clinical and knee cartilage image feature data were included in this study. First, the clinical risk factors were screened using univariate Cox regression and then used to construct the Clinical model. Next, the image features were selected step by step using univariate and least absolute shrinkage and selection operator (LASSO) Cox methods and then used to construct the Image model. Finally, the Image+Clinical model was constructed by combining the Image model with the clinical risk factors, and it was then converted into a nomogram for better visualization and future clinical use. All models were validated and compared using the C-index. In addition, Kaplan–Meier (KM) survival curves with the log-rank test and calibration curves were used to assess the models' risk stratification ability and prediction consistency. Age and three Western Ontario and McMaster Universities (WOMAC) scores were found to be significantly correlated with KR and were thus included in the Clinical model. Fifty-eight features were selected from the 92 knee cartilage image features using univariate Cox regression, and four image features were retained by the LASSO Cox method. The Image+Clinical model and the nomogram were then constructed by combining the clinical risk factors and the Image model. Among all models, the Image+Clinical model showed the best predictive performance, and the Image model outperformed the Clinical model in KR risk prediction consistency. By determining an optimal cutoff value, both the Image and Image+Clinical models could effectively stratify the KOA patients into KR high-risk and low-risk groups (log-rank test: p<0.05). In addition, the calibration curves showed that model predictions were in excellent agreement with the actual observations for both 3-year and 6-year KR risk probabilities, in both the training and test sets. The constructed model and nomogram showed excellent risk stratification and prediction ability and can be used as a tool to evaluate the progress and prognosis of individual KOA patients and to guide clinical decision-making for KOA treatment and prognosis.
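As a hedged sketch of the two-stage feature selection described above (univariate Cox screening followed by an L1-penalized Cox model), the snippet below uses the lifelines library; the column names (time_to_KR, KR_event), the p-value cutoff, and the penalizer value are illustrative assumptions, not the study's settings.

```python
# Two-stage Cox feature selection: univariate screening, then LASSO Cox.
import pandas as pd
from lifelines import CoxPHFitter

def univariate_screen(df, features, duration="time_to_KR", event="KR_event", p_cut=0.05):
    """Keep features whose univariate Cox p-value falls below p_cut."""
    kept = []
    for f in features:
        cph = CoxPHFitter()
        cph.fit(df[[f, duration, event]], duration_col=duration, event_col=event)
        if cph.summary.loc[f, "p"] < p_cut:
            kept.append(f)
    return kept

def lasso_cox(df, features, duration="time_to_KR", event="KR_event", penalizer=0.1):
    # l1_ratio=1.0 makes the penalty pure LASSO, shrinking weak coefficients to zero.
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)
    cph.fit(df[features + [duration, event]], duration_col=duration, event_col=event)
    return cph  # cph.concordance_index_ gives the training C-index
```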
The musculoskeletal system, comprising bones, cartilage, skeletal muscles, tendons, ligaments, and other tissues, is a remarkable system that bears external and internal loads properly and controls the body's motion efficiently. In this system, skeletal muscle is clearly indispensable. People have been studying the mysteries of skeletal muscle mechanics for the last 80 years, and many modeling methods have been used to study skeletal muscles. Among these methods, multi-scale modeling is increasingly frequently used to study musculoskeletal systems, and especially skeletal muscles. In this review, we summarize the multi-scale modeling methods used in the skeletal muscle modeling studies reported so far. Then, several multi-scale methods developed for other tissues that could potentially be applied to skeletal muscle modeling are discussed. Finally, future research directions and the main challenges of multi-scale skeletal muscle modeling are briefly presented.
This paper describes a novel mathematical model of the three-phase induction motor and, based on that model, a study of its various operating modes, dynamic behavior, and regenerative braking. The model uses a v/f scheme to apply braking, with the energy flowing back to the supply system instead of being wasted in a braking resistor. The theory of induction motors is explored through both equations and a computer simulation model built in SIMULINK. The model also helps to study the behavior of the motor during starting and load variation. The results of the analysis demonstrate regenerative braking and the dynamic behavior, and offer an opportunity to study the motor's characteristics under these conditions.
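As a rough illustration of the constant volts-per-hertz idea underlying such a v/f scheme (not the authors' SIMULINK model), the sketch below computes a stator voltage command from a commanded frequency; the rated values and the low-speed boost voltage are illustrative assumptions.

```python
# Constant v/f command law: stator voltage tracks commanded frequency up to rated values.
V_RATED = 400.0   # line voltage at rated frequency [V] (assumed)
F_RATED = 50.0    # rated supply frequency [Hz] (assumed)
V_BOOST = 20.0    # low-speed boost to offset the stator resistance drop [V] (assumed)

def vf_command(f_cmd):
    """Return the stator voltage magnitude for a commanded frequency."""
    if f_cmd <= 0.0:
        return 0.0
    if f_cmd >= F_RATED:
        return V_RATED                      # above rated frequency the voltage is capped
    return V_BOOST + (V_RATED - V_BOOST) * f_cmd / F_RATED

# During braking the commanded frequency is ramped below the rotor speed, so the slip
# becomes negative and the machine returns energy to the supply instead of a resistor.
```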
This paper summarizes the approaches to and the implications of bottom-up infrastructure modeling in the framework of the EMF28 model comparison "Europe 2050: The Effects of Technology Choices on EU Climate Policy". It includes models covering all the sectors currently under scrutiny by the European Infrastructure Priorities: electricity, natural gas, and CO2. Results suggest that some infrastructure enhancement is required to achieve decarbonization, and that the necessary network development can be attained in a reasonable timeframe. In the electricity sector, additional cross-border interconnection is required, but the expansion of generation and the development of low-cost renewables are a more challenging task. For natural gas, the falling total consumption could be satisfied by the infrastructure currently in place, and even in a high-gas scenario the infrastructure implications remain manageable. Model results on the future role of Carbon Capture, Transport, and Sequestration (CCTS) vary, but suggest that most of the transportation infrastructure might be required in and around the North Sea.
The recently proposed Equivalent Dipole Model for describing the electromechanical properties of ionic solids in terms of 3 ions and 2 bonds has been applied to PZT ceramics and lead-free single crystal piezoelectric materials, providing analysis in terms of an effective ionic charge and the asymmetry of the interatomic force constants. For PZT it is shown that, as a function of composition across the morphotropic phase boundary, the dominant bond compliance peaks at 52% ZrO2. The stiffer of the two bonds shows little composition dependence with no anomaly at the phase boundary. The effective charge has a maximum value at 50% ZrO2, decreasing across the phase boundary region, but becoming constant in the rhombohedral phase. The single crystals confirm that both the asymmetry in the force constants and the magnitude of effective charge are equally important in determining the values of the piezoelectric charge coefficient and the electromechanical coupling coefficient. Both are apparently temperature dependent, increasing markedly on approaching the Curie temperature.
The compositional dependence of the ac conductivity (σac) and of the real (σ′) and imaginary (σ′′) parts of the complex electric conductivity (σ*) was investigated as a function of temperature (T) and frequency (f) for the Mn0.7+xZn0.3SixFe2−2xO4 (x = 0.0, 0.1, 0.2, and 0.3) spinel ferrite system. The compositional dependence of the lattice constant suggested that most of the substituted Si4+ ions reside at grain boundaries and only a few are inside the grains. The variation of σac(x, f, T) is explained on the basis of the segregation and diffusion of Si4+ ions at grain boundaries and within grains, respectively, together with the electrode effect. The thermal variation of the ac conductivity at fixed frequency suggested two different mechanisms that could be responsible for conduction in the system. It is found that σ* is not the preferred representation for the dielectric data, and scaling the real part of the conductivity by the normalized frequency and by the scaled frequency was unsuccessful. Fitting the ac conductivity data with the path percolation approximation was suitable in the low-frequency regime, while the effective medium approximation (EMA) was successful in the high-frequency regime.
It was determined that samples of styrene-butadiene rubber (SBR) containing highly aromatic oil exhibit memory effects that give rise to a dynamic elastic modulus, damping, and a degree of internal stress that can be tailored through the applied electric field strength. The capability and stability of the interaction between aligned neighboring dipoles to sustain a memory effect once the aligning electric field is removed are studied. It is determined that, depending on the spatial arrangement and the amount of electric charge of the dipoles, this interaction is able to promote a memory effect that preserves the alignment between them. This electrostatic interaction acts as a counteracting effect that maintains the alignment, an effect here termed electroelasticity. The developed model was applied successfully to SBR composite samples to explain the memory effects recorded in dynamic mechanical analysis (DMA) measurements under an electric field. In addition, an electric-inclusion model based on the inclusion theory for continuous media was applied to determine the degree of internal stress in the dielectric composite material due to the externally applied electric field. Finally, by coupling the model developed here with basic considerations of the mechanical properties of composite materials, a procedure for determining the maximum possible gap between the electric dipoles in composite dielectric materials is also presented.
The ability of a structure consisting of a photovoltaic element with a built-in posistor layer, based on a polymer nanocomposite with a carbon filler and in direct thermal contact with the element, to protect against overvoltages was studied experimentally and by simulation. It was shown that the current and voltage on the reverse-biased p–n junction of the photovoltaic layer are limited and decrease from the moment the temperature of the structure reaches values close to the tripping temperature at which the posistor nanocomposite switches to its low-conductivity state. The temperature of the photovoltaic layer remains close to the tripping temperature of the posistor layer, approximately 125 °C. The possibility of protecting photovoltaic systems against reverse electrical overvoltages and thermal breakdown using photovoltaic elements with built-in layers of posistor polymer nanocomposites with carbon fillers was thereby established.
This paper is concerned with the dynamic behavior of a high-speed mass measurement system with a conveyor belt (a checkweigher). The goal is to construct a simple model of the measurement system that reproduces the response of the system. The checkweigher with electromagnetic force compensation can be approximated physically by combined spring-mass-damper systems, and the equation of motion is derived. The model parameters (a damping coefficient and a spring constant) can be obtained from experimental data for the open-loop system. Finally, the validity of the proposed model is confirmed by comparing the simulation results with the measured responses. The simple dynamic model obtained offers practical and useful information for examining control schemes.
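As a minimal sketch of the spring-mass-damper approximation described above, the snippet below integrates a single-degree-of-freedom model with SciPy; the mass, damping, stiffness, and step load are illustrative assumptions, whereas in the paper the damping coefficient and spring constant are identified from open-loop experimental data.

```python
# Spring-mass-damper sketch of a checkweigher platform responding to a landing package.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 0.5, 8.0, 2.0e4        # mass [kg], damping [N*s/m], stiffness [N/m] (assumed)
F = lambda t: 0.5 * 9.81          # step load: weight of a 0.5 kg package [N] (assumed)

def rhs(t, y):
    x, v = y                      # platform displacement and velocity
    return [v, (F(t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
# sol.y[0] is the platform displacement; its transient and settling behavior are what
# the identified model has to reproduce against the measured open-loop response.
```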
To align with the global goal of keeping warming below 2 °C, a market-based initiative, the Emissions Trading System (ETS), has been developed to mitigate climate change. However, while the carbon allowances traded in the ETS are mostly held and traded by polluting companies, financial actors engaging in "speculation", activities that might be detrimental to the functioning of the ETS, have also invested in the ETS. Drawing on the big-data archive of Google Trends, we construct a news-based speculation index to proxy for the role of speculation in the dynamics of carbon pricing. Given our preliminary finding of inherent volatility and the mixed-frequency nature of the dataset, we employ the GARCH-MIDAS econometric technique to test the hypothesis that an all-inclusive framework reflecting both the emission-compliance and emission-non-compliance dynamics of the ETS is the most accurate approach to modeling carbon prices. We show that higher speculation in the ETS fosters higher long-term volatility in carbon prices, that speculation is a good predictor of carbon prices, and that its positive impact on carbon price returns makes the ETS an attractive investment opportunity. We provide a data-driven framework upon which the growing debate about whether the behavior of non-compliance actors in the ETS endangers or benefits its functioning can be evaluated empirically.
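As a hedged sketch of how a news-based speculation proxy can be assembled from Google Trends, the snippet below uses the pytrends package; the search terms, the timeframe, and the aggregation into a monthly index (the low-frequency MIDAS leg) are illustrative assumptions, not the authors' exact construction.

```python
# Build a rough news-based speculation index from Google Trends search interest.
import pandas as pd
from pytrends.request import TrendReq

terms = ["carbon speculation", "EUA futures", "carbon trading"]   # assumed terms
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(terms, timeframe="2015-01-01 2021-12-31")
trends = pytrends.interest_over_time().drop(columns="isPartial")

# Average the search-interest series and aggregate to monthly frequency, matching
# the low-frequency component of a GARCH-MIDAS specification; then standardize.
speculation_index = trends.mean(axis=1).resample("M").mean()
speculation_index = (speculation_index - speculation_index.mean()) / speculation_index.std()
```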
“What do you want to be when you grow up?” is a common question posed to children, and answers such as firefighter, policeman, athlete, doctor, or teacher are probably just as common. Some, like Oliver Sacks, recall an early fascination with metals, the periodic table, and chemical reactions that planted the seeds for the later pursuit of the natural sciences or medicine (neurology in his case). We are familiar with memories of chemists that include their first chemistry set, followed by complaints by parents over strange smells and close calls due to particularly exothermic reactions. For others, including myself, a future in research remained more obscure until a later period in adolescence or perhaps even the undergraduate years. Rather than seeking out a field, the field finds you. In actuality, teachers and mentors with expertise in and enthusiasm for a field exert a force that charts a path toward scientific research throughout life. Here, I stress the importance of terrific teachers and mentors from high school onwards to the undergraduate, Ph.D., and postdoc years for setting me on a track (and on occasion preventing me from derailing) to research in structural chemistry and molecular mechanism using crystallography as the main tool.
The design of compounds that bind selectively to specific isoforms of histone deacetylases (HDACs) is ongoing research aimed at preventing adverse side effects. Two of the most studied isoforms are HDAC1 and HDAC6, which are important targets in various disease conditions. Here, various machine learning (ML) approaches were built and tested to predict bioactivity and selectivity towards specific isoforms. The selectivities of compounds were predicted using two different approaches: a selectivity profiling approach and a selectivity window approach. In the selectivity profiling approach, the bioactivity models of HDAC1 and HDAC6 were used to determine the selectivities of compounds. In the selectivity window approach, models were developed by training directly on the bioactivity differences of compounds tested against HDAC1 and HDAC6. First, all classification and regression ML algorithms available in the Python package used were tested, and five algorithms were selected based on their performance and algorithmic diversity. These models were compared to each other using traditional evaluation metrics. Then, a consensus of the selected ML models was employed to compare the selectivity window and selectivity profiling approaches in their ability to distinguish HDAC1- and HDAC6-selective compounds from others. The performances were also tested against an external set, where the selectivity window approach performed slightly better at categorizing selective compounds than the selectivity profiling approach. The approaches presented in this study could be an important step towards screening molecular libraries to discover selective inhibitors for these targets.
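As a rough sketch of the consensus idea described above, the snippet below trains several scikit-learn classifiers on the same features and calls a compound selective only when a majority of models agree; the particular estimators, the 0/1 label encoding, and the vote threshold are illustrative assumptions rather than the study's exact setup.

```python
# Consensus (majority-vote) classification across several ML models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def consensus_predict(X_train, y_train, X_test, min_votes=3):
    """y_train holds 0/1 labels (non-selective/selective); returns consensus labels."""
    models = [
        RandomForestClassifier(n_estimators=300, random_state=0),
        GradientBoostingClassifier(random_state=0),
        LogisticRegression(max_iter=5000),
        SVC(random_state=0),
        KNeighborsClassifier(),
    ]
    votes = np.zeros(len(X_test), dtype=int)
    for model in models:
        model.fit(X_train, y_train)
        votes += model.predict(X_test)          # each model contributes one 0/1 vote
    return (votes >= min_votes).astype(int)      # selective only if a majority agrees
```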
Ca2+ is the most important second messenger, controlling a variety of intracellular processes through oscillations of the cytosolic Ca2+ concentration. These oscillations arise from Ca2+ release from the endoplasmic reticulum (ER) into the cytosol through channels and the re-uptake of Ca2+ into the ER by pumps. A common channel type present in many cell types is the inositol trisphosphate receptor (IP3R), which is activated by IP3 and by Ca2+ itself, leading to Ca2+-induced Ca2+ release (CICR). We have shown in an experimental study [15] that Ca2+ oscillations are sequences of random spikes that occur by wave nucleation. Here we use our recently developed model for Ca2+ dynamics in three dimensions to illuminate the role of IP3R clustering in spatially extended systems.
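As a minimal point-model illustration of IP3R-mediated CICR (with no spatial dimension, so not the three-dimensional stochastic model used here), the sketch below integrates the classical Li–Rinzel reduction; the parameter values are standard literature choices and are assumptions, in µM and seconds.

```python
# Li-Rinzel point model of IP3R-mediated Ca2+-induced Ca2+ release (CICR).
import numpy as np
from scipy.integrate import solve_ivp

c0, c1 = 2.0, 0.185                       # total Ca2+ and ER/cytosol volume ratio
v1, v2, v3, k3 = 6.0, 0.11, 0.9, 0.1       # channel, leak, and SERCA pump parameters
d1, d2, d3, d5, a2 = 0.13, 1.049, 0.9434, 0.08234, 0.2
IP3 = 0.4                                  # intermediate IP3 level (assumed)

def li_rinzel(t, y):
    c, h = y                               # cytosolic Ca2+ and IP3R de-inactivation gate
    c_er = (c0 - c) / c1                   # ER Ca2+ from conservation of total Ca2+
    m = IP3 / (IP3 + d1)                   # IP3 binding
    n = c / (c + d5)                       # Ca2+ activation
    q2 = d2 * (IP3 + d1) / (IP3 + d3)
    dc = c1 * (v1 * (m * n * h) ** 3 + v2) * (c_er - c) - v3 * c**2 / (k3**2 + c**2)
    dh = a2 * (q2 * (1.0 - h) - c * h)     # slow Ca2+-dependent inactivation
    return [dc, dh]

sol = solve_ivp(li_rinzel, (0.0, 200.0), [0.1, 0.9], max_step=0.05)
# sol.y[0] traces repetitive Ca2+ spikes driven by CICR through the IP3R.
```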
Modeling of specification events during development poses new challenges to biochemical modeling. These include data limitations and a notorious absence of homeostasis in developing systems. The sea urchin is one of the best studied model organisms with respect to development, and a network, the Endomesoderm Network, has been proposed that is presumed to control endoderm and mesoderm specification in the embryo of Strongylocentrotus purpuratus. We have constructed a dynamic model of a subnetwork of the Endomesoderm Network. In constructing the model, we had to resolve the following issues: the choice of an appropriate subsystem, the assignment of embryonic data to a cellular model, and the choice of appropriate kinetics. Although the resulting model is capable of reproducing parts of the experimental data, it falls short of reproducing the specification of cell types. These findings can facilitate the refinement of the Endomesoderm Network.
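As a generic illustration of the kind of ODE formulation such a dynamic gene-network model uses (Hill-type activation terms plus first-order decay), the sketch below wires up three hypothetical genes; the gene names, interactions, and parameters are illustrative assumptions and do not represent the Endomesoderm Network wiring.

```python
# Toy gene regulatory network with Hill kinetics and linear decay.
import numpy as np
from scipy.integrate import solve_ivp

def hill_act(x, K, n=2):
    """Hill-type activation of a target by regulator concentration x."""
    return x**n / (K**n + x**n)

def grn_rhs(t, y, k_syn=1.0, k_deg=0.2, K=0.5):
    a, b, c = y                                           # three hypothetical transcripts
    da = k_syn * 1.0 - k_deg * a                          # constitutively expressed input
    db = k_syn * hill_act(a, K) - k_deg * b               # activated by gene a
    dc = k_syn * hill_act(b, K) * (1.0 - hill_act(a, K)) - k_deg * c   # feed-forward motif
    return [da, db, dc]

sol = solve_ivp(grn_rhs, (0.0, 50.0), [0.0, 0.0, 0.0], max_step=0.1)
# sol.y holds the simulated expression time courses to be compared with embryonic data.
```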
In this introductory chapter, we contextualize and briefly describe the intellectual contributions of the different chapters in this book. Following this chapter, which comprises Part I of this book, there are 11 chapters and each of these chapters addresses a particular research question or a set of questions about the creative class. Part II of this book consists of two chapters and this part focuses on alternate conceptual approaches to the creative class. Part III also contains two chapters and this part concentrates on analytics. Part IV consists of five chapters and this part sheds light on a variety of regional perspectives on the creative class. Finally, the two chapters that make up Part V take a retrospective and a prospective look at research on the creative class. In the concluding section of this chapter, we offer some reflections on the cornerstones of creative class theory as advocated by Richard Florida two decades ago.
The use of color in computer vision has received growing attention. This chapter introduces the basic principles underlying the physics and perception of color and reviews the state-of-the-art in color vision algorithms. Parts of this chapter have been condensed from [58] while new material has been included which provides a critical review of recent work. In particular, research in the areas of color constancy and color segmentation is reviewed in detail.
The first section reviews physical models for color image formation as well as models for human color perception. Reflection models characterize the relationship between a surface, the illumination environment, and the resulting color image. Physically motivated linear models are used to approximate functions of wavelength using a small number of parameters. Reflection models and linear models are introduced in Section 1 and play an important role in several of the color constancy and color segmentation algorithms presented in Sections 2 and 3. For completeness, we also present a concise summary of the trichromatic theory which models human color perception. A discussion is given of color matching experiments and the CIE color representation system. These models are important for a wide range of applications including the consistent representation of color on different devices. Section 1 concludes with a description of the most widely used color spaces and their properties.
The second section considers progress on computational approaches to color constancy. Human vision exhibits color constancy as the ability to perceive stable surface colors for a fixed object under a wide range of illumination conditions and scene configurations. A similar ability is required if computer vision systems are to recognize objects in uncontrolled environments. We begin by reviewing the properties and limitations of the early retinex approach to color constancy. We describe in detail the families of linear model algorithms and highlight algorithms which followed. Section 2 concludes with a subsection on recent indexing methods which integrate color constancy with the higher level recognition process.
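As a rough illustration of the simplest illuminant-estimation baselines against which the surveyed algorithms are typically compared, the sketch below implements gray-world and white-patch (max-RGB) estimates with a diagonal correction; it is not one of the linear-model or indexing algorithms reviewed in Section 2, and the clipping to [0, 1] assumes normalized image values.

```python
# Classical color-constancy baselines: gray-world and white-patch illuminant estimates.
import numpy as np

def gray_world(image):
    """image: H x W x 3 float array; mean of each channel estimates the illuminant."""
    return image.reshape(-1, 3).mean(axis=0)

def white_patch(image):
    """Brightest response per channel (max-RGB) estimates the illuminant."""
    return image.reshape(-1, 3).max(axis=0)

def correct(image, illuminant):
    # Diagonal (von Kries-style) correction: scale channels so the estimate maps to gray/white.
    gains = illuminant.mean() / np.clip(illuminant, 1e-6, None)
    return np.clip(image * gains, 0.0, 1.0)
```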
Section 3 addresses the use of color for image segmentation and stresses the role of image models. We start by presenting classical statistical approaches to segmentation which have been generalized to include color. The more recent emphasis on the use of physical models for segmentation has led to new classes of algorithms which enable the accurate segmentation of effects such as shadows, highlights, shading, and interreflection. Such effects are often a source of error for algorithms based on classical statistical models. Finally, we describe a color texture model which has been used successfully as the basis of an algorithm for segmenting images of natural outdoor scenes.
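As a minimal example of a classical statistical approach generalized to color, the sketch below clusters pixels with k-means in plain RGB; the choice of color space and number of clusters are illustrative assumptions, and the physics-based methods discussed in Section 3 go further by explicitly modeling shadows, highlights, and interreflection.

```python
# Classical statistical color segmentation: k-means clustering of pixel colors.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_segments=4, seed=0):
    """image: H x W x 3 array; returns an H x W label map of color clusters."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_segments, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(image.shape[:2])
```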