In the past few decades, many significant insights have been gained in several areas of computational methods in sciences and engineering, and new problems and methodologies have appeared. There is a continuing need in these fields for the exchange of information.
The aim of this book is to facilitate the sharing of ideas, problems and methodologies between computational scientists and engineers in several disciplines. Extended abstracts of papers on recent advances in computational methods in sciences and engineering are provided. The book briefly describes new methods in numerical analysis, computational mathematics, computational and theoretical physics, computational and theoretical chemistry, computational biology, computational mechanics, computational engineering, computational medicine, high performance computing, etc.
https://doi.org/10.1142/9789812704658_fmatter
PREFACE.
TABLE OF CONTENTS.
https://doi.org/10.1142/9789812704658_0001
In the systems of equations generated for GPS satellite observations, when the positional parameters are held fixed or constrained, the receiver clock solutions are significantly affected, since this variable absorbs the uncontrolled effects. In this work, viewing the receiver clock solutions as a time series, we analyse their components to detect periodicity in the uncontrolled effects and build new mathematical models to control them.
https://doi.org/10.1142/9789812704658_0002
The need for absorbing boundary conditions and coordinate stretching arises when one wishes to simulate the extension to infinity of a problem on a finite computational domain. This paper develops the proper absorbing boundary operators for an elliptic partial differential equation arising in a magnetostatic problem. It then uses the finite difference technique with the derived absorbing boundary condition in order to investigate its effect on the solution of the problem. In this procedure a finite distance is considered instead of the original infinite distance; applying proper boundary conditions at that distance, one can emulate open boundary conditions. Absorbing boundary conditions have been employed extensively for hyperbolic problems, in which the solutions move at finite speed and are limited in duration by the return of outwardly propagating features of the solution.
In this work we consider an elliptic magnetostatic problem, develop absorbing boundary condition formulas of different orders, and finally apply them to the problem using the finite difference technique.
https://doi.org/10.1142/9789812704658_0003
No abstract received.
https://doi.org/10.1142/9789812704658_0004
No abstract received.
https://doi.org/10.1142/9789812704658_0005
Nonlinear parabolic systems of partial differential equations are considered. In a recent work, we proposed a new iterative method based on eigenfunction expansion to integrate these systems. In this paper, we prove the convergence of the method on bounded time intervals under a certain condition that is easier to satisfy. We then show that the solution obtained by the new method converges to the exact solution for a problem in combustion theory. Moreover, we determine the number of iterations needed to obtain a solution with a predetermined level of accuracy. It is expected that the convergence analysis can be applied to similar time-dependent systems.
https://doi.org/10.1142/9789812704658_0006
No abstract received.
https://doi.org/10.1142/9789812704658_0007
No abstract received.
https://doi.org/10.1142/9789812704658_0008
No abstract received.
https://doi.org/10.1142/9789812704658_0009
We formulate a new numerical method based on integration along characteristic curves to solve a size-structured cell population model in an environment with an evolving resource concentration. Numerical simulations are also reported in order to show that the approximations converge to the solution of the continuous problem. We also demonstrate their good behaviour in studying the dynamics of the problem considered.
https://doi.org/10.1142/9789812704658_0010
No abstract received.
https://doi.org/10.1142/9789812704658_0011
No abstract received.
https://doi.org/10.1142/9789812704658_0012
Hidden Markov Models (HMMs) have been widely used in computational biology applications during the last few years. In this paper we review the main algorithms proposed in the literature for training and decoding an HMM with labeled sequences, in the context of topology prediction for bacterial integral membrane proteins. We evaluate the Maximum Likelihood algorithms traditionally used for training a Hidden Markov Model against the less commonly used Conditional Maximum Likelihood-based algorithms and, combining results previously obtained in the literature, we propose a new variant for Maximum Likelihood training. We compare the convergence rates of the algorithms, showing the advantages and disadvantages of each method in the context of the problem at hand. Finally, we evaluate the predictive performance of each approach using state-of-the-art algorithms proposed for Hidden Markov Model decoding, and comment on the appropriateness of each one.
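The Maximum Likelihood training discussed above is built on the forward algorithm, which computes the likelihood of an observed sequence under the model. A minimal sketch with a toy two-state HMM (all parameters and symbols below are illustrative, not taken from the paper):

```python
# Forward algorithm for a toy two-state HMM: computes the likelihood
# P(observations | model), the core quantity maximized in ML training.
def forward_likelihood(obs, start, trans, emit):
    """obs: list of symbols; start[i], trans[i][j], emit[i][symbol]."""
    states = range(len(start))
    alpha = [start[i] * emit[i][obs[0]] for i in states]
    for sym in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][sym]
                 for j in states]
    return sum(alpha)

# Hypothetical parameters: state 0 emits membrane-like residues ('M'),
# state 1 emits loop-like residues ('L').
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [{'M': 0.9, 'L': 0.1},
        {'M': 0.2, 'L': 0.8}]
p = forward_likelihood(['M', 'M', 'L'], start, trans, emit)
```

Training (e.g. Baum-Welch) then adjusts `start`, `trans` and `emit` to increase this likelihood over the labeled sequences.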
https://doi.org/10.1142/9789812704658_0013
We present a relaxed scheme with more precise information about local speeds of propagation and a multidimensional construction of the cell averages. Hence the physical domain of dependence is simulated correctly and high resolution is maintained by a genuinely multidimensional piecewise nonoscillatory reconstruction. Relaxation schemes have advantages that include high resolution and simplicity: no (approximate) Riemann solvers or characteristic decompositions are necessary. The performance of the scheme is illustrated by tests on the two-dimensional Euler equations of gas dynamics.
https://doi.org/10.1142/9789812704658_0014
No abstract received.
https://doi.org/10.1142/9789812704658_0015
No abstract received.
https://doi.org/10.1142/9789812704658_0016
A composite function was used successfully for modelling the Natural Gas (NG) consumption in 16 European energy markets. The basis of the model is a logistic function whose upper limit is also a logistic function of time, with secondary parameters determined either endogenously together with the remaining primary parameters or exogenously in a sample space of the energy market. Fitting of this 'double logistic' dynamic model to NG consumption data for the period 1980-2000 gave better Standard Errors of Estimate (SEEs) for ten energy markets in comparison with the linear, the exponential/asymptotic and the static logistic model. Supplementary results, obtained by statistical analysis of answers collected by circulating a questionnaire in the wider area of Attica in Greece, led to the conclusion that income/welfare, residential place and information play an important role in the intention of inhabitants to adopt the NG alternative.
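The 'double logistic' form, a logistic curve whose saturation level is itself a logistic function of time, can be sketched as follows (the parameter names are ours and purely illustrative; the paper's actual parameterization may differ):

```python
import math

def double_logistic(t, U, kU, tU, k, t0):
    """Logistic growth whose upper limit is itself logistic in time."""
    upper = U / (1.0 + math.exp(-kU * (t - tU)))  # time-varying ceiling
    return upper / (1.0 + math.exp(-k * (t - t0)))

# Illustrative trajectory over the study period: consumption rises
# monotonically and saturates below the ultimate ceiling U.
series = [double_logistic(t, U=100.0, kU=0.2, tU=1995.0, k=0.4, t0=1990.0)
          for t in range(1980, 2001)]
```

In a fit, the six parameters would be estimated by minimizing the squared residuals against the observed consumption data.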
https://doi.org/10.1142/9789812704658_0017
An algorithmic procedure has been designed and developed for Computer Aided Dimensional Analysis (DA) of chemical engineering processes. The main purpose of this software is to construct/select the best combination of dimensionless groups describing a process adequately under certain criteria. The creation/operation of an Ontological Knowledge Base (OKB) plays a central role in this procedure, as it provides, inter alia, the means for filtering/reducing the dimensionless groups obtained by solving the system of dimensional equations according to the Buckingham Π Theorem. The successful implementation of this software is presented thoroughly, step by step, in a case of mass transfer (liquid drops moving in immiscible liquids).
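The core computation behind the Buckingham Π Theorem, finding exponent vectors that make products of the variables dimensionless, amounts to computing the nullspace of the dimension matrix. A minimal sketch in exact rational arithmetic (the drag-on-a-sphere example is ours for illustration, not the paper's mass transfer case):

```python
from fractions import Fraction

def nullspace(A):
    """Rational nullspace basis of matrix A (list of rows), via Gauss-Jordan."""
    m, n = len(A), len(A[0])
    A = [[Fraction(x) for x in row] for row in A]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]          # normalize pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:             # eliminate column c
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row, c in zip(A, pivots):
            v[c] = -row[f]
        basis.append(v)
    return basis

# Dimension matrix (rows M, L, T) for drag on a sphere:
# columns = exponents of F, v, d, rho, mu.
dim = [[1, 0, 0, 1, 1],      # mass
       [1, 1, 1, -3, -1],    # length
       [-2, -1, 0, 0, -1]]   # time
groups = nullspace(dim)      # two independent dimensionless groups
```

Each basis vector gives the exponents of one Π group; with 5 variables and 3 independent dimensions the theorem predicts 5 - 3 = 2 groups, which the nullspace dimension confirms.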
https://doi.org/10.1142/9789812704658_0018
No abstract received.
https://doi.org/10.1142/9789812704658_0019
No abstract received.
https://doi.org/10.1142/9789812704658_0020
No abstract received.
https://doi.org/10.1142/9789812704658_0021
In the modelling of many physical systems, stiff differential equations need to be solved numerically. Because stiff problems require implicit methods, implementation costs are an important consideration in the assessment of contending algorithms. We will consider a number of alternatives to the popular backward difference methods. These standard algorithms are incapable of being both A-stable and of order greater than 2 and we therefore focus on multistage methods with special structures to guarantee low implementation costs. High order Runge–Kutta methods suffer the natural disadvantage of high-dimensionality in the algebraic systems required for stage evaluation. This is partly overcome by the use of singly-implicit methods and we will consider how far this approach can be taken. Further developments involve general linear methods and the key to finding good methods within this large family seems to be to restrict attention to methods with inherent Runge–Kutta stability.
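The implementation-cost issue above stems from stability: an explicit method needs prohibitively small steps on a stiff problem, while an A-stable implicit method does not. A minimal illustration on the scalar test equation y' = λy (our toy example, not one of the methods discussed in the paper):

```python
def explicit_euler(lmbda, y0, h, steps):
    """Forward Euler: multiplies y by (1 + h*lmbda) each step."""
    y = y0
    for _ in range(steps):
        y += h * lmbda * y
    return y

def implicit_euler(lmbda, y0, h, steps):
    """Backward Euler: y_{n+1} = y_n + h*lmbda*y_{n+1}, solved linearly."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 - h * lmbda)
    return y

lam, h = -1000.0, 0.01   # step size far outside the explicit stability region
y_exp = explicit_euler(lam, 1.0, h, 100)   # amplifies by |1 - 10| = 9 per step
y_imp = implicit_euler(lam, 1.0, h, 100)   # damps by 1/11 per step
```

The exact solution decays to zero; backward Euler reproduces the decay at this step size, while forward Euler blows up, which is why stiff solvers accept the cost of an algebraic solve per step.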
https://doi.org/10.1142/9789812704658_0022
No abstract received.
https://doi.org/10.1142/9789812704658_0023
We present a least-squares fitting procedure to obtain a quartic force field using energy and gradient data arising from B3LYP/cc-pVTZ calculations on a 'simplex-sum' of Box and Behnken grid of points. We illustrate and test, for H2CO, the quartic force field and the resulting anharmonic vibrational spectra computed from 44 simplex-sum configurations, and we compare our results with those obtained using the classical 168 energy calculations.
https://doi.org/10.1142/9789812704658_0024
In this paper, we describe a method to calculate prediction intervals for neuro-fuzzy networks used as predictive systems. The method also allows defining prediction intervals for the fuzzy rules that constitute the rule base of the neuro-fuzzy network, resulting in a more readable knowledge base. Moreover, the method does not depend on a specific architecture and can be applied to a variety of neuro-fuzzy models. An illustrative example is reported to show the validity of the proposed approach.
https://doi.org/10.1142/9789812704658_0025
No abstract received.
https://doi.org/10.1142/9789812704658_0026
No abstract received.
https://doi.org/10.1142/9789812704658_0027
We describe the design and implementation of a software system that, from high level specification documents, generates source code for the numerical valuation of real options. The documents allow the description of both the flexibility present in investment projects and the dynamics of the underlying stochastic variables, independently of any valuation method-specific details. By applying symbolic transformations to the specifications the system generates efficient and reusable software components, which are combined with software components that implement numerical methods in order for the final real option valuation code to be generated.
https://doi.org/10.1142/9789812704658_0028
In this paper an object-oriented approach is used to visualize a family of enzymes that are widespread in nature and can be found in many animal and plant species. To this end, the Unified Modeling Language (UML) is utilized to describe the chemical information accurately and in detail. In this survey we intend to provide multidimensional access to the inherent information concerning a biochemical process, and acid phosphatase is chosen as an example.
https://doi.org/10.1142/9789812704658_0029
We discuss the results of a recent atomistic Monte Carlo simulation study of polyethylene (PE) melts grafted by one of their ends on a solid substrate. The simulations have been executed with a rather detailed force field which describes interactions at the level of individual atoms. The shortest length scale in the simulation is the carbon-carbon bond length l (= 1.54 Å). Results are presented for the main thermodynamic and conformational properties of these grafted layers in the vicinity of two types of surfaces: a non-interacting hard wall and a graphite basal plane.
https://doi.org/10.1142/9789812704658_0030
No abstract received.
https://doi.org/10.1142/9789812704658_0031
The correct estimation of conservative statements at the control volume surfaces has a significant influence on the accuracy of the numerical solution and can even improve the convergence history. In this paper, a physically sound pressure-weighted scheme is utilized to calculate the convective fluxes at cell faces. The method is extended in a manner which permits an arbitrary choice of element distributions and orientations in the solution domain, either entirely or partly, wherever the advantages of one distribution/orientation dominate those of the others. In this regard, the modifications that need to be undertaken in order to include the advantages of utilizing unstructured element distributions/orientations in a finite element volume context are presented. Finally, the extended formulations are validated against a standard benchmark test case.
https://doi.org/10.1142/9789812704658_0032
No abstract received.
https://doi.org/10.1142/9789812704658_0033
Sequential visualisation methods, the most widely used methods for the graphical representation of the Mandelbrot set, are compared. Two groups of methods are presented: in the first, the Mandelbrot set (or its border) is rendered and, in the second, its complement is rendered. Examples of two-dimensional images obtained using these methods are also given.
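Both groups of methods rest on the same escape-time iteration z → z² + c; a minimal sketch (the colouring and rendering details that distinguish the compared methods are beyond this fragment):

```python
def escape_time(c, max_iter=100):
    """Iterations until |z| exceeds 2 under z -> z*z + c; max_iter if bounded."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

# Set-rendering methods keep only points that never escape within max_iter;
# complement-rendering methods colour each outside point by its escape time.
inside = escape_time(0j)        # 0 belongs to the Mandelbrot set
outside = escape_time(2 + 0j)   # 2 escapes after a few iterations
```

Scanning a pixel grid over the complex plane and mapping `escape_time` to a palette yields the familiar two-dimensional images.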
https://doi.org/10.1142/9789812704658_0034
Three-dimensional molecular structure is fundamental in drug design and discovery, docking, and chemical function identification. The input to our algorithm consists of a set of approximate interatomic distances, or distances constrained in intervals of varying precision; some are specified by the covalent structure and others by NMR experiments and application of the triangle inequality. The output is a valid molecular conformation in a specified neighborhood of the input. We expect our approach to help in detecting outliers in the NMR experiments and to handle partial inputs. Numerical linear algebra methods are employed for reasons of speed and accuracy. The main tools include, besides iterative local optimization, distance geometry and matrix perturbations for minimizing singular values of real symmetric matrices. Our algorithm is able to bound the number of degrees of freedom on the conformation manifold. A public-domain MATLAB implementation is described; it can determine a conformation of a molecule with 20 backbone atoms in 3.79 s on a 500 MHz Pentium III.
https://doi.org/10.1142/9789812704658_0035
In this work we propose a new algorithm to detect graph isomorphism. The algorithm has been applied to calculate the similarity index of chemical compounds, representing the molecular structures as colored graphs and these graphs as vectors in n-dimensional spaces. In this way it is possible to reduce the maximum common structure detection problem to a simple vector problem.
https://doi.org/10.1142/9789812704658_0036
In this paper we study approximate regularities of strings, that is, approximate periods, approximate covers and approximate seeds. We explore their similarities and differences, and we implement algorithms for solving the smallest distance approximate period/cover/seed problem and the restricted smallest approximate period/cover/seed problem in polynomial time, under a variety of distance rules (the Hamming distance, the edit distance, and the weighted edit distance). We then analyse our experimental results to determine the time complexity of the algorithms in practice.
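Under the Hamming distance, the smallest-distance approximate period of a fixed length can be found by a per-position majority vote over the blocks of the string; a simple sketch of this idea (our illustration, far simpler than the paper's general algorithms, which also handle edit distances):

```python
from collections import Counter

def best_hamming_period(s, p):
    """Best length-p approximate period of s under the Hamming distance:
    majority vote per position; returns (period, total mismatches).
    A trailing partial block is compared on its overlap only."""
    period, dist = [], 0
    for i in range(p):
        column = s[i::p]                 # characters aligned at offset i
        ch, freq = Counter(column).most_common(1)[0]
        period.append(ch)
        dist += len(column) - freq       # minority characters are mismatches
    return ''.join(period), dist

t, d = best_hamming_period("abcabdabc", 3)   # one mismatch ('d' vs 'c')
```

Minimizing over all candidate lengths p then yields the smallest-distance approximate period of the string.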
https://doi.org/10.1142/9789812704658_0037
We have recently presented a variant of the suffix tree which allows much larger genome sequence databases to be analysed efficiently. The new data structure, termed the distributed suffix tree (DST), is designed for distributed memory parallel computing environments (e.g. Beowulf clusters). It tackles the memory bottleneck by constructing subtrees of the full suffix tree independently. The standard operations on suffix trees of biological importance translate easily to this new data structure. None of these operations on the DST requires inter-process communication, and many have optimal expected parallel running times.
https://doi.org/10.1142/9789812704658_0038
The purpose of this work was to model and compare the zero-frequency detective quantum efficiency (DQE(0)) of Gd2O2S:Tb, Gd2O2S:Eu, Gd2O2S:Pr, Gd2O2S:Pr,Ce,F and YTaO4:Nb granular scintillators for use in X-ray medical imaging detectors. The work uses a mathematical model based on a photon diffusion differential equation. The X-ray tube voltage was considered to vary from 10 to 200 kV, while the phosphor screen coating thickness ranged between 10 and 160 mg/cm2.
https://doi.org/10.1142/9789812704658_0039
No abstract received.
https://doi.org/10.1142/9789812704658_0040
Electrical Impedance Epigastrography (EIE) is a non-invasive method that allows the assessment of gastric emptying rates without using ionizing radiation. The method works by applying, through electrodes placed over the epigastric region, an alternating current at a frequency of 32 kHz whose amplitude can be varied by the operator from 1 to 4 mA, and measuring the potential difference between the electrodes. The post-acquisition analysis relies on the Short-Time Fourier Transform (STFT) algorithm to extract the gastric motility component of the signal (centre frequency of 0.05 Hz with a bandwidth of 0.02 Hz). The exact influence of motion artefacts was investigated by asking volunteers to carry out a variety of movements. It was clear that the motion artefacts produced positive and negative spikes on the acquired signal. Owing to the large frequency range of a delta function, the spikes produced false positives in the frequency domain, suggesting the presence of gastric motility when there was none, or exaggerating the magnitude of real events. Several attempts were made at removing motion artefacts, and an appropriate algorithm was created. The efficacy of the algorithm was tested using epigastrographic signals stored in a database at the University of Surrey.
https://doi.org/10.1142/9789812704658_0041
This work studies the scavenging efficiencies of an average urban aerosol by means of filtration after a given removal mechanism (coagulation, heterogeneous nucleation, or gravitational settling) as a function of time. Filtration is a simple, versatile, and economical means of collecting samples of aerosol particles. The capture of aerosol particles by filtration is the most common method of aerosol sampling and is a widely used method for air cleaning. At low dust concentrations, fibrous filters are the most economical means for achieving high-efficiency collection of submicrometer particles. Aerosol filtration is used in diverse applications, such as respiratory protection, air cleaning of smelter effluent, processing of nuclear and hazardous materials, and clean rooms. The process of filtration is complicated, and although the general principles are well known, there is a gap between theory and experiment. In this paper, we review filtration in order to provide an understanding of the properties of fibrous and porous membrane filters, the mechanisms of collection, and how collection efficiency and resistance to airflow change with filter properties and particle size.
https://doi.org/10.1142/9789812704658_0042
No abstract received.
https://doi.org/10.1142/9789812704658_0043
A computer-based image analysis system was developed for the automatic classification of brain tumours according to their degree of malignancy using Support Vector Machines (SVMs). Morphological and textural nuclear features were quantified to encode tumour malignancy. Forty-six cases were used to construct the SVM classifier. The best feature vector was obtained by performing an exhaustive search in feature space. The SVM classifier gave 84.8% accuracy using the leave-one-out method. To validate the system's generalization to unseen data, 41 cases collected from a different hospital were utilized; on this unseen validation set, classification performance was 82.9%. The generalization ability of the proposed classification methodology was thus verified, reinforcing the belief that automatic characterization of brain tumours might be feasible in everyday clinical routine.
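The leave-one-out protocol used above can be sketched generically; here a toy 1-nearest-neighbour rule stands in for the SVM, and the data are made up for illustration:

```python
def leave_one_out_accuracy(samples, labels, classify):
    """Leave-one-out: train on all cases but one, test on the held-out case,
    and repeat for every case. `classify(train_x, train_y, x)` can be any
    classifier; a toy 1-NN is used below in place of the SVM."""
    hits = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += classify(train_x, train_y, samples[i]) == labels[i]
    return hits / len(samples)

def one_nn(train_x, train_y, x):
    """Label of the nearest training sample (squared Euclidean distance)."""
    d = [sum((a - b) ** 2 for a, b in zip(v, x)) for v in train_x]
    return train_y[d.index(min(d))]

acc = leave_one_out_accuracy([(0.0,), (0.1,), (1.0,), (1.1,)],
                             ['low', 'low', 'high', 'high'], one_nn)
```

The scheme makes maximal use of a small data set (46 cases here) at the cost of retraining the classifier once per case.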
https://doi.org/10.1142/9789812704658_0044
No abstract received.
https://doi.org/10.1142/9789812704658_0045
The feasibility of an error function with a fractional exponent for optimization problems and the training of neural networks was explored theoretically. Analytical expressions for estimating the model parameters (weight factors) were obtained. Algorithms were designed, and a numerical experiment on actual economic data was carried out, demonstrating the efficiency of the proposed procedure.
https://doi.org/10.1142/9789812704658_0046
This article presents a numerical homogenization method in thermo-elasticity based on the choice of a representative volume of the building material. Heterogeneities are generated by a random process. This numerical model, developed by Mounajed, is implemented in the general finite element code 'SYMPHONIE' of the CSTB. The stochastic method is applied to the calculation of the homogenized behaviour of a High Strength Concrete. Furthermore, through a localization process, the method allows the local fields to be estimated and possible damage to the building material to be predicted.
https://doi.org/10.1142/9789812704658_0047
No abstract received.
https://doi.org/10.1142/9789812704658_0048
Laser dynamics simulations have been carried out using a cellular automata model. The Shannon entropy has been used to study the different emergent behaviours exhibited by the system, mainly laser spiking and constant laser operation. It is also shown that the Shannon entropy of the distribution of the populations of photons and electrons reproduces the laser stability curve, in agreement with the theoretical predictions from the laser rate equations and with experimental results.
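The Shannon entropy of the photon/electron population distribution, used above to separate the spiking regime from constant operation, is straightforward to compute; a minimal sketch with made-up count histograms:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution of counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Illustrative histograms of photon numbers over time: a sharply peaked
# distribution (steady lasing) has lower entropy than a broad one (spiking).
h_steady = shannon_entropy([0, 1, 96, 2, 1])
h_spiking = shannon_entropy([20, 20, 20, 20, 20])
```

Tracking this entropy as the pump parameter varies is one way such a simulation can trace out a stability curve.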
https://doi.org/10.1142/9789812704658_0049
No abstract received.
https://doi.org/10.1142/9789812704658_0050
We report optimal structures, interaction energies and interaction induced dipole moments and polarizabilities for the van der Waals complex (H2O)2…He. Relying on Møller-Plesset perturbation theory with large, carefully optimized basis sets we have located the most stable configuration while the corresponding interaction energies were computed using coupled-cluster techniques. The potential energy surface (PES) of the complex has been determined using the MP2 method. The dependence of the calculated interaction properties on the basis set is also studied.
https://doi.org/10.1142/9789812704658_0051
We present results of self-consistent calculations of second pVT-virial coefficients B(T), viscosity data η(T) and diffusion coefficients ρD for heavy globular gases (BF3, CF4, SiF4, CCl4, SiCl4, SF6, MoF6, WF6, UF6, C(CH3)4, and Si(CH3)4) and their binary mixtures with globular gases and with Ar, Kr, and Xe, respectively. The calculations are performed mainly in the temperature range between 200 and 900 K by means of isotropic n-6 potentials with explicitly temperature-dependent separation rm(T) and potential well depth ε(T). In the case of the pure gases, the potential parameters at T = 0 K (ε, rm, n) and the enlargement of the first level radii δ are obtained by solving an ill-posed problem of minimizing the squared deviations between experimental and calculated values, normalized to their relative experimental error. The temperature dependence of the potential is a result of the influence of vibrational excitation on binary interactions. The interaction potentials of the binary mixtures are obtained with simple combination rules from the potential parameters of the neat gases. In all cases we observe excellent reproduction of the experimental thermophysical properties of the neat gases and the binary mixtures.
https://doi.org/10.1142/9789812704658_0052
In this article, we discuss the extended Bingham fluid model introduced in [2] for electrorheological fluids, and formulate the problem in the axially symmetric cylindrical coordinate system. As an application we choose the ER shock absorber, and present some numerical simulations of its behaviour.
https://doi.org/10.1142/9789812704658_0053
Spartan random fields have multivariate Gibbs probability distributions that are determined from a frugal set of parameters [1]. Thus, they provide a parsimonious model for representing the variability of spatially distributed processes. Potential applications include interpolation and simulation in geostatistical studies as well as methods for compressing large images. Here we develop methods for simulating Spartan fields with pre-determined parameters on regular lattices and at random locations in two spatial dimensions.
https://doi.org/10.1142/9789812704658_0054
No abstract received.
https://doi.org/10.1142/9789812704658_0055
An efficient molecular dynamics simulation method for solids is presented, in which two different schemes are hybridized. This hybrid method achieves speedups of more than a factor of 10 in comparison with conventional simulation methods.
https://doi.org/10.1142/9789812704658_0056
This paper presents a stylized model of a learning process through which power generating companies could adjust their supply bidding strategies in order to achieve a profit-maximizing equilibrium in the form of a Supply Function Equilibrium (SFE). The model is based on realistic market assumptions: market players can form their behaviour relying on market observations, with no need for information on other players' contracts and generation costs. We assume an asymmetric duopoly selling a homogeneous commodity. Market conditions are characterized by linear demand and supply functions, and in addition a constraint is imposed on the quantity of one of the players.
https://doi.org/10.1142/9789812704658_0057
The dynamics of nanoscopic arrays of monodomain magnetic elements are simulated by means of the Pardavi-Horvath algorithm. The experimental hysteresis loop is reproduced for arrays of Ni with period 100 nm and mean coercive field 710 Oe. We investigate the fractal character of the cluster of elements with positive magnetic moments; no fractal is found. We also apply the technique of damage spreading: the consequences of a local flip of a magnetic element remain limited to a finite area. We conclude that the system does not show complex behaviour.
https://doi.org/10.1142/9789812704658_0058
The solution of the two-dimensional time-independent Schrödinger equation is considered by partial discretization. We apply exponential-fitting methods for the solution of the discretized problem, which is an ordinary differential equation problem. All methods are applied to the computation of the eigenvalues of the two-dimensional harmonic oscillator and the two-dimensional Hénon-Heiles potential. The results are compared with those produced by full discretization.
https://doi.org/10.1142/9789812704658_0059
The purpose of the present study is the implementation of a classification system for differentiating healthy subjects from patients with depression. Twenty-five depressive patients and an equal number of gender- and age-matched normal controls were evaluated using a computerized version of the Wechsler digit span test. Morphological waveform features were extracted from the digitized Event-Related Potential (ERP) signals, recorded from 15 scalp electrodes. The feature extraction process focused on the P600 component of the ERPs. The designed system comprised two classifiers, the probabilistic neural network (PNN) and the cubic least-squares (CLS) minimum-distance classifier, two routines for feature reduction and feature selection, and an overall system evaluation routine consisting of the exhaustive search and leave-one-out methods. The highest classification accuracies achieved were 96% for the PNN and 94% for the CLS, using the 'latency/amplitude ratio' and 'peak-to-peak slope' two-feature combination. In conclusion, by employing computer-based pattern recognition techniques with features not easily evaluated by the clinician, patients with depression could be distinguished from healthy subjects with high accuracy.
https://doi.org/10.1142/9789812704658_0060
The aim of this study was to compare the performance of the probabilistic neural network (PNN) classifier with the multilayer perceptron (MLP) classifier, in an attempt to discriminate between patients with diabetes mellitus type II (DMII) and normal subjects using medical images from brain single photon emission computed tomography (SPECT). Features from the gray-level histogram and the spatial-dependence matrix were generated from image samples collected from brain SPECT images of diabetic patients and healthy volunteers, and they were used as input to the PNN and the MLP classifiers. The highest accuracies were 99.5% for the MLP and 99% for the PNN, achieved in the left inferior parietal lobule employing the mean value and correlation features. Our findings show that the MLP classifier slightly outperformed the PNN classifier in almost all cerebral regions, but the lower computational time of the PNN makes it a very useful classification tool. The high precision of both classifiers indicates significant differences in radiopharmaceutical (99mTc-ECD) uptake of diabetic patients compared with normal controls, which may be due to cerebral blood flow disruption in patients with DMII.
https://doi.org/10.1142/9789812704658_0061
In this paper we present results concerning the performance of a parallel Molecular Dynamics simulation of a Lennard-Jones liquid on a PC cluster consisting of four Pentium III processors running under Linux and using the MPI protocol. The parallelization methodology was the atom decomposition method, in which each processor is assigned a given group of atoms. We examined the program performance for system sizes of 10^2 to 10^5 atoms and numbers of processors varying from 1 to 4. The influence of the communication methods between processors was also examined. It was found that even such a small cluster can be a very useful and cost-effective solution for the realization of MD simulations of small Lennard-Jones liquid systems for real times up to 1 µs within a reasonable computation time.
https://doi.org/10.1142/9789812704658_0062
Using molecular dynamics and a rigid ion potential, we studied the vibrational properties of the NiO(110) surface. The simulations were carried out at T = 300 K, and we calculated the phonon density of states (DOS) for the anion and cation sublattices as a function of the distance from the surface, along the three directions parallel and normal to the surface. We discuss how the bulk DOS is altered as a function of the distance from the surface.
https://doi.org/10.1142/9789812704658_0063
A systematic study of the electric properties of substituted diacetylenes H-C≡C-C≡C-X, X = Li, Na, K, Al, Ga, In, F, Cl, Br, I, CN, NC, CP, PC, C6H5, C4N, C5H4N and N2B is presented. The electric properties studied are the dipole moment (µα), the dipole polarizability (ααβ), and the first (βαβγ) and second (γαβγδ) dipole hyperpolarizabilities. The calculations have been performed with ab initio methods (Møller-Plesset perturbation theory, coupled cluster techniques) of high predictive capability and flexible basis sets especially designed for (hyper)polarizability calculations.
https://doi.org/10.1142/9789812704658_0064
We report an investigation of the electric polarizability of sodium chloride clusters. Relying on conventional ab initio and density functional theory methods, we find that the mean (per atom) dipole polarizability decreases strongly with cluster size.
https://doi.org/10.1142/9789812704658_0065
The problem of determining the parameters of thin anisotropic films used in microelectronics on the basis of ellipsometric measurements is explored. A method for determining film parameters using neural networks is proposed. The network is trained in the space of admissible parameter values of the layered system. The training algorithm is based on the Widrow-Hoff rule. The error of the experimental data was taken into account during training. The neural network is applied to determine the parameters of uniaxial Langmuir-Blodgett films of dimethyl-3,4:9,10-perylene-bis(dicarboximide). The network has shown high performance, and its results coincide with those obtained by other methods. The network can be applied to the examination of layered systems.
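The Widrow-Hoff rule referred to above is the least-mean-squares (delta) update for a single linear neuron. A minimal sketch, with synthetic data standing in for the ellipsometric measurements (which are not reproduced here):

```python
import numpy as np

def widrow_hoff(X, y, lr=0.1, epochs=500):
    """Train a single linear neuron with the Widrow-Hoff (LMS) rule.

    For each sample the weights move along the gradient of the squared
    error of the linear output: w <- w + lr * (t - w.x - b) * x.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            err = ti - (w @ xi + b)   # error of the linear output
            w += lr * err * xi        # delta-rule weight update
            b += lr * err             # bias update
    return w, b
```

On noiseless data generated by a linear target the rule converges to the exact coefficients, which is the property exploited when inverting for film parameters.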
https://doi.org/10.1142/9789812704658_0066
No abstract received.
https://doi.org/10.1142/9789812704658_0067
Multiaffine analysis is applied to the KPZQ (Kardar-Parisi-Zhang growth with Quenched disorder) model. In previous work, we found that the BDP (Ballistic Deposition growth with Power-law noise) model exhibits multiaffinity, and that power-law noise tends to break the KPZ universality. In this paper, we report on the monotonic self-affine scaling of the KPZQ model. It is confirmed that the KPZ universality holds for higher-order exponents. This implies that the BDP model and the KPZQ model can be classified by multiaffine analysis.
https://doi.org/10.1142/9789812704658_0068
High Dimensional Model Representation (HDMR) is a newly developed technique which decomposes a multivariate function into a constant term, univariate functions, bivariate functions, and so on. These functions are forced to be mutually orthogonal by means of an orthogonality condition. The technique, generally used for high-dimensional input-output systems, can be applied in various disciplines including sensitivity analysis, differential equations, inversion of data, and so on. In this article we present a computer program that computes the individual components of the HDMR resolution of a given multivariate function. The program also calculates the global sensitivity indices. Finally, the results of numerical experiments for different sets of functions are presented.
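On a tensor-product grid with equal-weight quadrature, the zeroth- and first-order HDMR components reduce to means over the remaining variables; a sketch under that simplifying assumption (a uniform product measure, not the program's general quadrature):

```python
import numpy as np

def hdmr_first_order(f, grids):
    """Zeroth- and first-order HDMR components of f on a tensor grid.

    f0 is the mean of f over the grid; f_i(x_i) is the mean over all
    other variables minus f0, making the components mutually orthogonal
    under the uniform product measure.
    """
    mesh = np.meshgrid(*grids, indexing="ij")
    F = f(*mesh)                              # full tensor of function values
    f0 = F.mean()                             # constant (zeroth-order) term
    comps = []
    for i in range(len(grids)):
        other = tuple(j for j in range(len(grids)) if j != i)
        comps.append(F.mean(axis=other) - f0)  # univariate component f_i
    return f0, comps
```

For an additive function such as f(x, y) = x + y the first-order components recover the full function, so the higher-order terms vanish.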
https://doi.org/10.1142/9789812704658_0069
No abstract received.
https://doi.org/10.1142/9789812704658_0070
No abstract received.
https://doi.org/10.1142/9789812704658_0071
Ab initio quantum mechanical studies are carried out for the conformeric and isomeric forms of several chlorine and bromine peroxides, XOOCl and XOOBr (X = H, CH3, Cl, Br, I), of interest in stratospheric halogen chemistry. The calculations indicate interesting trends in the nature of halogen-oxygen bonding. In particular, both the halogen-oxygen bond distances and the relative stability orderings show a notable dependence on the ionic character of the bond and the electronegativity of the X fragment.
https://doi.org/10.1142/9789812704658_0072
Modeling of enantiomeric separations in chromatography allows a prognosis of whether the desired enantiomeric separation could be achieved in a studied chromatographic system, and anticipates the elution order of particular enantiomeric analytes. The use of molecular mechanics in chiral chromatography makes it possible to modify available chiral stationary phases, to design new ones, and to explore the various types of interactions in chromatographic systems. Furthermore, an insight into the enantioseparation mechanisms can be obtained.
https://doi.org/10.1142/9789812704658_0073
The problem of constructing probability density functions (PDFs) of volatility is discussed. The first model we have used is the simple Gaussian random walk model, for which the PDF is given in the form of a one-dimensional contour integral. The second model is based on joint multidimensional Student PDFs of returns. Such distributions are useful for describing well-established deviations from the Gaussian random walk observed in financial time series, such as an approximate scaling of the PDFs of returns, heavy tails of return distributions, return-volatility correlations and long-ranged volatility-volatility correlations. We have fixed the free parameters of the Gaussian and modified multidimensional Student PDFs of returns over a short-term period by fitting three to eight years of trade-by-trade quotes from the Eurex Bund, Bobl, DAX and EuroSTOXX futures contracts, and over the long term by fitting 100+ years of the Dow Jones 30 Industrial Average (DJIA) and 50+ years of the Standard & Poor's 500 (S&P 500) daily quotes. Two estimators are considered to quantify volatility, from which short-term dynamic and long-term static PDFs are constructed. The volatility distributions are compared with the historical ones of the Eurex Bund, Bobl, DAX and EuroSTOXX futures contracts, and also with the historical distributions for the DJIA and the S&P 500 indices.
https://doi.org/10.1142/9789812704658_0074
This paper is concerned with the solutions of initial value problems of the Boltzmann-Peierls equation (BPE). This integro-differential equation describes the evolution of heat in crystalline solids at very low temperatures. The BPE describes the evolution of the phase density of a phonon gas. The corresponding entropy density is given by the entropy density of a Bose-gas. We derive a reduced three-dimensional kinetic equation which has a much simpler structure than the original BPE. Using special coordinates in the one-dimensional case, we can perform a further reduction of the kinetic equation. Making a one-dimensionality assumption on the initial phase density one can show that this property is preserved for all later times. We derive kinetic schemes for the kinetic equation as well as for the derived moment systems. Several numerical test cases are shown in order to validate the theory.
https://doi.org/10.1142/9789812704658_0075
Active suspension systems offer significant advantages over passive systems. The high power necessary to operate conventional active systems has led to further investigation of low-energy-consuming systems.
The present work is a theoretical study of a low energy active suspension system. A half vehicle model is developed in order to simulate the active suspension system based on the change of its mechanical lever ratio. The degrees of freedom used are enough to simulate accurately real road conditions, whilst at the same time the models are not very complicated. Due to the horizontal motion of the spring involved, the energy consumption is much lower than in conventional systems, in which actuators are primary suspension elements.
Non-linear models are included. These were written in AUTOSIM, which was used to create programs in FORTRAN. The elimination of roll in manoeuvring, by appropriate movement of the actuators, created problems of discomfort, and the conflict between these two factors is investigated. When roll is controlled, the control law required to avoid jacking of the vehicle is defined and different layouts are examined. This work contributes to the good design of a low-energy automotive active suspension, mechanically compatible with contemporary systems.
https://doi.org/10.1142/9789812704658_0076
An implicit four-step, sixth-order, P-stable method for initial value problems of the form y″ = f(x, y) is suggested. It is recommended for systems with stiff oscillatory solutions. Only two stages are required per step, instead of the three stages of the methods found in the literature so far.
https://doi.org/10.1142/9789812704658_0077
The LSA machine is an effective method for predicting a class from linearly separable data. The LSA machine is based on the combination of Logarithmic Simulated Annealing with the Perceptron Algorithm. In this paper we present and compare the classification accuracy of the LSA machine on two medical databases: a) the Wisconsin Breast Cancer Database, a binary database with two associated classes, and b) the Diabetic Patient Management Database, a multicategory database with four associated classes. Many researchers use the Wisconsin Breast Cancer Database (WBCD) as a benchmark for testing their systems. The WBCD consists of 699 samples with 9 input attributes. The LSA machine is trained on 50% and 75% of the entire dataset, and in both cases we obtain a classification accuracy of 98.8% on the remaining samples. This classification accuracy on the test set is, to the best of our knowledge, the highest reported in the literature. The Diabetic Patient Management database consists of 746 samples with 18 input values and an associated class label denoting one of four treatments for the patient. For comparison purposes, the LSA machine is trained on 646 samples of this database, obtaining a stable classification accuracy over 74% for all four classes, with a highest classification accuracy of 87%.
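The Perceptron Algorithm at the core of the LSA machine can be sketched as follows (a toy separable dataset stands in for the medical data; the logarithmic simulated-annealing layer is omitted):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical perceptron learning for +/-1 labels.

    Converges to a separating hyperplane when the data are linearly
    separable; the bias is absorbed as an extra constant input.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias coordinate
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, ti in zip(Xb, y):
            if ti * (w @ xi) <= 0:             # misclassified sample
                w += ti * xi                   # perceptron update
                updated = True
        if not updated:                        # all samples separated
            break
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)
```

On the linearly separable AND function the algorithm terminates with zero training error, illustrating the convergence guarantee the LSA machine builds upon.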
https://doi.org/10.1142/9789812704658_0078
This paper addresses the issue of mining encrypted data, in order to protect confidential information while permitting knowledge discovery. Common cryptographic algorithms are considered and their robustness against data mining algorithms is evaluated. Having identified robust cryptosystems, data are encrypted and well–known data mining techniques are applied on the encrypted data to produce classification rules which are then compared with those obtained from the initial non-encrypted databases.
https://doi.org/10.1142/9789812704658_0079
The dynamic behavior of a rotating flexible four-bar structure has been analyzed by a Lagrange multiplier formulation, which couples rigid and flexible generalized coordinates into a system that can be defined by differential-algebraic equations (DAEs). A stable numerical algorithm for solving differential-algebraic equations is implemented. The automatic differentiation (AD) system ADIFOR is used to generate numerical values of the Jacobian matrix of the constraint equation described in the formulation. Another AD tool, AUTODERIVE, is further used to obtain the second-order derivative terms. A comparison between hand-derived and automatic differentiation is studied, and it shows the great accuracy of implementing AD tools in the differential-algebraic equation solver for flexible structures.
https://doi.org/10.1142/9789812704658_0080
No abstract received.
https://doi.org/10.1142/9789812704658_0081
In the present contribution we compare the new Multitaper Filtering technique with the very popular Filter Diagonalisation Method. The substitution of a time-independent problem, like the standard Schrödinger equation, by a time-dependent one in the Filter Diagonalisation Method allows the employment of, and comparison with, standard signal processing filtration machinery. The use of zero-order prolate spheroidal tapers as filtering functions is here extended and exactly formulated using techniques originating from general investigations of prolate spheroidal wave functions. We investigate the modifications presented with respect to accuracy and general effectiveness. The approach may be useful in various branches of physics and the engineering sciences, including signal processing applications, as well as possibly in general time-dependent processes.
https://doi.org/10.1142/9789812704658_0082
A steady-state, two-dimensional rivulet flow is introduced to model the two phase flow of air and water in a circular pipe. Gravity is ignored to simplify the analysis. Two kinds of rivulet geometry are introduced. One is the annulus rivulet, and the other is the circular arc rivulet. The relationship between pressure gradient and water flux is studied in both cases while holding flux of air constant. Specifically, two kinds of problems are defined for the circular arc rivulet: the inverse problem, which consists in calculating the size of the rivulet and pressure gradient needed to drive the flow for specified air and water fluxes, and the direct problem, which consists in calculating the size of the rivulet and velocities of fluids for given pressure gradient. Computations are done using FEMLAB.
https://doi.org/10.1142/9789812704658_0083
The notion of informational mobility that came along with the revolutionary development of digital media has radically reconfigured architectural practice. This transmutation has been observed along two main axes. The first is constituted by the modifications in the procedure of architectural production that led to more complete but standardized mass projects, in relation to advanced methods of construction. The other refers to the theoretical approach of architecture for a digitally altered world. The virtual augmentation of biological perception revealed an amplified (broader) experiential universe, urging the manipulators of space to evolve. The integration of digital systems facilitates a spherical approach, including a plethora of parameters. The transcended outcome is a mutation to an architectural hybrid composed of mass and information.
https://doi.org/10.1142/9789812704658_0084
We propose a quantum correction transport model and apply a parallel adaptive refinement methodology to nanoscale semiconductor device simulation on a Linux cluster with MPI libraries. In nanoscale device simulation the quantum mechanical effect plays an important role. To model this effect, a quantum correction Poisson equation is derived and replaces the classical one in the transport models. Our numerical method is mainly based on the adaptive finite volume method with a posteriori error estimation, a constructive monotone iterative method, and a domain decomposition algorithm. A 20 nm double-gate MOSFET is simulated with the developed simulator.
https://doi.org/10.1142/9789812704658_0085
We present a computationally efficient nonlinear iterative method for calculating the electron energy spectra in single and vertically stacked InAs/GaAs semiconductor quantum dots. The problem is formulated with the effective one-electronic-band Hamiltonian, the energy- and position-dependent electron effective mass approximation, and the Ben Daniel-Duke boundary conditions. The proposed iterative method converges for all quantum dot simulations. Numerical results show that the electron energy spectra depend significantly on the number of coupled layers. For the excited states, the layer dependence effect has been found to be weaker than that for the ground state.
https://doi.org/10.1142/9789812704658_0086
No abstract received.
https://doi.org/10.1142/9789812704658_0087
At sufficiently low temperatures T, for a finite net size, we expect all spins to be in the antiferromagnetic ground state. We then apply a magnetic field +H on the spin-“down” sublattice and −H on the spin-“up” sublattice to provoke spin reversals on all sites from the initial configuration, which is now metastable. At sufficiently large H, spin reversal takes place. The switching curve H(T) obtained from computer simulations reflects the different switching mechanisms in the very low, low and high temperature ranges, corresponding to spin flip and domain wall movement. These two mechanisms may be observed in real samples. In this paper we discuss the onset of the two switching paths and analyze the main trends of the switching curves for different model parameters, such as the size N of the spin net and the coordination number Z.
https://doi.org/10.1142/9789812704658_0088
We present an analysis of ab initio quantum chemical calculations of the electric hyperpolarizability in some systems of primary importance. Various computational aspects of these calculations are closely examined. Particular consideration is reserved for basis set effects and the systematic evaluation of the performance of theoretical methods.
https://doi.org/10.1142/9789812704658_0089
Speech signals can be considered as being generated by mechanical systems with inherently nonlinear dynamics. The purpose of this paper is to present an automatic segmentation method based on nonlinear dynamics with low computational cost. The fractal dimension is a measure of signal complexity that can characterize voiced and unvoiced segments. The segmentation process is carried out in two stages: estimation of the fractal dimension using the method suggested by Katz [1], and detection of the stationarity of the fractal dimension by means of the variance parameter computed over the smoothed fractal dimension signal. Using this combination of techniques, a quick and automatic segmentation is obtained. Our experiments were computed over recorded signals from a Spanish speech database (AHUMADA).
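Katz's fractal dimension estimator used in the first stage has a closed form; a sketch of it for a 1-D signal:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal.

    D = log10(n) / (log10(n) + log10(d / L)), where L is the total
    curve length (sum of successive-sample distances), d the maximum
    distance from the first sample, and n = L / a with a the mean
    step length (so n = N - 1 for a uniformly sampled signal).
    """
    x = np.asarray(x, dtype=float)
    L = np.abs(np.diff(x)).sum()        # total curve length
    d = np.max(np.abs(x - x[0]))        # farthest excursion from the start
    n = len(x) - 1                      # L divided by the mean step
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight line yields D = 1, while an irregular (noisy) signal, whose path length greatly exceeds its excursion, yields D > 1 — the contrast the segmentation exploits between voiced and unvoiced frames.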
https://doi.org/10.1142/9789812704658_0090
Spatial point process models provide a large variety of complex patterns to model particular clustered situations. Due to model complexity, spatial statistics often relies on simulation methods. Probably the most common such method is Markov chain Monte Carlo (MCMC) which draws approximate samples of the target distribution as the equilibrium distribution of a Markov chain. Perfect simulation methods are MCMC algorithms which ensure that the exact target distribution is sampled. In this paper we focus on point field models that have been used as particular models of galaxy clustering in both cosmology and spatial statistics. We present simulation and estimation techniques for these models and analyze by an extensive simulation study their flexibility for cluster modeling, under a large variety of practical situations.
https://doi.org/10.1142/9789812704658_0091
Hypertension is a growing undesired condition which damages health and threatens mostly the developed societies. It is estimated that 20% of the Greek population suffers from hypertension. Research efforts for controlling hypertension have focused on blocking Ang II release and, more recently, on competing with Ang II binding on AT1 receptors. This latter approach generated the synthesis of losartan and promoted it in the pharmaceutical market (COZAAR). Other derivative drugs, which fall into the sartan class, followed. To comprehend the stereoelectronic requirements which may lead to a better understanding of the molecular basis of hypertension, the stereochemical features of angiotensin II, its peptide antagonists sarmesin and sarilesin, synthetic peptide analogs, and AT1 non-peptide antagonists, both commercially available and synthetic, were explored. AT1 antagonists are designed to mimic the C-terminal part of Ang II [1]. In this respect, it is proposed that the butyl chain of losartan may mimic the isopropyl chain of Ile, the tetrazole ring the C-terminal carboxylate group, and the imidazole ring the corresponding imidazole ring of His6. The drug design is based on the optimization of superimposition studies of losartan with the C-terminal part of sarmesin [2].
https://doi.org/10.1142/9789812704658_0092
The numerical treatment of ODE initial value problems is an intensively researched field. Recently, qualitative algorithms, such as monotonicity- and positivity-preserving algorithms, have been the focus of investigation. For dynamical systems, energy-conservative algorithms are very important. In the case of Hamiltonian systems, symplectic algorithms are very effective. These algorithms are not adaptive, but they are undoubtedly powerful. High-performance computers and computer algebra software systems allow us to create an efficient adaptive energy-conservative numerical algorithm for solving ODE initial value problems. In this article an adaptive numerical-analytical algorithm is suggested which can be applied very effectively to Hamiltonian systems, but the idea of its construction can be adapted to other initial value problems in which some quantity is preserved in time. The idea and the efficiency of the proposed algorithm are demonstrated by simple examples, such as the Lotka-Volterra and linear oscillator problems.
https://doi.org/10.1142/9789812704658_0093
No abstract received.
https://doi.org/10.1142/9789812704658_0094
Almost Runge-Kutta methods are a sub-class of the family of methods known as general linear methods, used for solving ordinary differential equations. They retain many of the properties of traditional Runge-Kutta methods, with some added advantages. The higher stage order enables cheap error estimators to be obtained. For some orders it also means a reduction in the number of internal stages required to obtain that order. We will introduce these methods and present some recent results.
https://doi.org/10.1142/9789812704658_0095
The Taylor series method is one of the earliest analytic-numeric algorithms for the approximate solution of initial value problems for ordinary differential equations. The main idea behind the rehabilitation of these algorithms is the approximate calculation of higher derivatives using a well-known technique for partial differential equations. The approximate solution is given as a piecewise polynomial function defined on subintervals of the whole interval. This property offers various facilities for adaptive error control. This paper describes several explicit Taylor series algorithms with implicit extensions and examines their consistency and stability properties. The implicit extension is based on a collocation term added to the explicit truncated Taylor series. This idea differs from the general collocation method construction, which leads to the implicit Runge-Kutta algorithms [13]. We demonstrate some numerical test results for stiff systems, with which we attempt to prove the efficiency of these new-old algorithms.
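As a minimal illustration of an explicit truncated Taylor step, consider the scalar test equation y′ = y, for which every higher derivative equals y itself (a sketch of the basic idea only, not the authors' recursive-derivative machinery or the implicit collocation extension):

```python
import math

def taylor_step_exp(y, h, order=8):
    """One explicit Taylor step of given order for the test ODE y' = y.

    Since y^(k) = y for this equation, the truncated series is simply
    y * sum_{k=0}^{order} h^k / k!.
    """
    return y * sum(h**k / math.factorial(k) for k in range(order + 1))

def taylor_solve_exp(y0, t_end, n_steps, order=8):
    """March the explicit Taylor method with constant step h = t_end/n."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = taylor_step_exp(y, h, order)
    return y
```

With order 8 and step h = 0.1 the local error per step is of the order h⁹/9!, so the computed value of e is accurate to near machine precision.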
https://doi.org/10.1142/9789812704658_0096
The solution of the one-dimensional time-independent Schrödinger equation by exponentially-fitted symplectic integrators is considered. The Schrödinger equation is first transformed into a canonical Hamiltonian system. Numerical results are obtained for the one-dimensional harmonic oscillator and the hydrogen atom.
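The idea of a symplectic step can be illustrated with the Störmer-Verlet scheme for the harmonic oscillator Hamiltonian (a generic second-order sketch; the exponentially-fitted integrators of the abstract are not reproduced):

```python
def verlet(q0, p0, omega, h, n_steps):
    """Stoermer-Verlet (symplectic, order 2) for H = p^2/2 + omega^2 q^2/2."""
    q, p = q0, p0
    for _ in range(n_steps):
        p -= 0.5 * h * omega**2 * q   # half kick from the potential
        q += h * p                    # full drift
        p -= 0.5 * h * omega**2 * q   # second half kick
    return q, p

def energy(q, p, omega):
    """Hamiltonian of the harmonic oscillator."""
    return 0.5 * p**2 + 0.5 * omega**2 * q**2
```

The hallmark of a symplectic integrator is that the energy error stays bounded (of order h²) over long integrations instead of drifting, as the test below checks over 10⁴ steps.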
https://doi.org/10.1142/9789812704658_0097
No abstract received.
https://doi.org/10.1142/9789812704658_0098
No abstract received.
https://doi.org/10.1142/9789812704658_0099
Monte Carlo techniques were applied to evaluate the performance of the YAP scintillator for use in medical imaging applications. The energy range considered was from 50 to 800 keV and the thickness range from 5 to 30 mm. The absorption efficiency of YAP decreases rapidly in the energy range from 50 up to 200 keV. For higher energies up to 800 keV it exhibits a slow variation with energy. In the 200-800 keV range the scintillator absorbs energy mainly through Compton recoil electrons, while in the 50-200 keV range the photoelectric process dominates, even following a scatter event.
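The absorption efficiencies quoted above rest on exponential (Beer-Lambert) attenuation, whose Monte Carlo estimation can be sketched as follows (a first-interaction sketch with an illustrative attenuation coefficient, not the actual YAP cross-section data or full Compton transport):

```python
import numpy as np

def absorbed_fraction(mu, thickness, n_photons=100_000, seed=1):
    """Monte Carlo estimate of the fraction of normally incident photons
    interacting within a slab.

    Free path lengths are sampled from an exponential distribution with
    mean 1/mu; a photon interacts if its first free path is shorter
    than the slab thickness (units: mu in 1/cm, thickness in cm).
    """
    rng = np.random.default_rng(seed)
    depth = rng.exponential(1.0 / mu, n_photons)   # sampled free paths
    return np.mean(depth <= thickness)
```

The estimate converges to the analytic value 1 − exp(−μt), which serves as a validation of the sampling.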
https://doi.org/10.1142/9789812704658_0100
We present a method for the accurate calculation of the complete spectrum of the Schrödinger equation in terms of a B-spline polynomial basis. The method is capable of representing numerically the bound and continuum spectra of complex atomic systems. The theoretical method is discussed, and an application to the hydrogenic Hamiltonian is given.
https://doi.org/10.1142/9789812704658_0101
The Dirac radial functions are expanded in a polynomial B-spline basis, transforming the Dirac equation into a generalized eigensystem matrix problem. Due to the local nature of the B-spline functions, the matrix representations of all the operators involved are highly sparse. Diagonalization of the matrix equations provides the bound and continuum eigenstates. A comparison with the analytical solutions for hydrogenic atomic systems is presented, along with an application to non-hydrogenic atomic systems. The generalization of these programs to the case of exotic atomic systems and highly charged ionic systems is straightforward.
https://doi.org/10.1142/9789812704658_0102
The aim of this article is to develop model comparison techniques among three widely used discrete statistical distributions employed for estimating outstanding claim counts in actuarial theory and practice. The statistical treatment is from the Bayesian point of view. We utilize the advanced computational technique of the Reversible Jump Markov chain Monte Carlo algorithm to estimate the posterior odds among the different distributions for claim counts.
The results are compared for various data sets.
https://doi.org/10.1142/9789812704658_0103
No abstract received.
https://doi.org/10.1142/9789812704658_0104
In this work we present applications of data structuring techniques for string problems in biological sequences. We firstly consider the problem of approximate string matching with gaps and secondly the problem of identifying occurrences of maximal pairs in multiple strings. The proposed implementations can be used in many problems that arise in the field of Computational Molecular Biology.
https://doi.org/10.1142/9789812704658_0105
This paper considers the application of a novel optimization method, namely Particle Swarm Optimization, to compute Nash equilibria. The problem of computing equilibria is formulated as one of detecting the global minimizers of a real-valued, non-negative function. To detect more than one global minimizer at a single run of the algorithm, and to address effectively the problem of local minima, the recently proposed Deflection technique is employed. The performance of the proposed algorithm is compared to that of algorithms implemented in the popular game theory software suite GAMBIT. Conclusions are derived.
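A bare-bones global-best PSO for minimizing such a non-negative function might be sketched as follows (illustrative swarm parameters; the Deflection technique for locating several minimizers at once is omitted):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best Particle Swarm Optimization.

    `f` maps an array of shape (dim,) to a scalar; `bounds` is a list
    of (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()                              # personal best positions
    pval = np.array([f(xi) for xi in x])          # personal best values
    g = pbest[pval.argmin()].copy()               # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia / attraction weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # keep particles in bounds
        fx = np.array([f(xi) for xi in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

On a smooth test function such as the 2-D sphere the swarm contracts rapidly onto the global minimizer.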
https://doi.org/10.1142/9789812704658_0106
No abstract received.
https://doi.org/10.1142/9789812704658_0107
No abstract received.
https://doi.org/10.1142/9789812704658_0108
No abstract received.
https://doi.org/10.1142/9789812704658_0109
A technique for nodal stress recovery and a posteriori error estimation is developed for linear elasticity problems. The nodal stress recovery technique is based on the error distribution obtained from variation of the mapping function, and the a posteriori error is calculated from the recovered stress. A pronounced improvement in the recovered stress and the estimated error is observed with this method. In addition, results show that the estimated error can be considered an upper bound.
https://doi.org/10.1142/9789812704658_0110
Computational methods on molecular sequence data (strings) are at the heart of computational molecular biology. A DNA molecule can be thought of as a string over an alphabet of four characters {a,c,g,t} (nucleotides), while a protein can be thought of as a string over an alphabet of twenty characters (amino acids). A gene, which is physically embedded in a DNA molecule, typically encodes the amino acid sequence for a particular protein. Existing and emerging algorithms for string computation provide a significant intersection between computer science and molecular biology.
https://doi.org/10.1142/9789812704658_0111
With the rapidly growing number of WWW users, the hidden information in Web data becomes increasingly valuable. As a consequence of this phenomenon, mining Web data and analysing on-line users' behaviour and their on-line traversal patterns have emerged as a popular new research area. Based primarily on Web servers' log files, the main objective of traversal pattern mining is to discover the frequent patterns in users' browsing paths and behaviours. This paper presents a complete framework for web mining which allows users to pre-define physical constraints when analysing complex traversal patterns, in order to improve the efficiency of the algorithms and to offer flexibility in producing the results.
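The core counting step of traversal pattern mining, finding consecutive page sequences whose per-session support reaches a threshold, can be sketched as follows (hypothetical toy sessions; not the paper's constraint-based framework):

```python
from collections import Counter

def frequent_paths(sessions, length, min_support):
    """Frequent consecutive traversal patterns from per-user page paths.

    Each session is a list of page ids; a pattern's support is the
    number of sessions in which it occurs at least once.
    """
    counts = Counter()
    for path in sessions:
        seen = set()
        for i in range(len(path) - length + 1):
            seen.add(tuple(path[i:i + length]))   # consecutive sub-path
        for pat in seen:                          # count once per session
            counts[pat] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```

With sessions [a,b,c], [a,b,d] and [b,c], the length-2 patterns (a,b) and (b,c) each have support 2 and survive a threshold of 2, while (b,d) does not.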
https://doi.org/10.1142/9789812704658_0112
This paper is concerned with mining temporal features from web logs. We present two methods. The first concerns the temporal mining of sequential patterns, in which sequence data are used as support for discovered patterns in order to find periodicity in web log data. The second is an efficient method for finding periodicity in web log sequence data which handles missing sequences by dealing with the overlap problem.
https://doi.org/10.1142/9789812704658_0113
No abstract received.
https://doi.org/10.1142/9789812704658_0114
No abstract received.
https://doi.org/10.1142/9789812704658_0115
No abstract received.
https://doi.org/10.1142/9789812704658_0116
No abstract received.
https://doi.org/10.1142/9789812704658_0117
The phenomenon of birefringence, the anisotropy of the refractive index induced in light when it impinges on matter subject to static, generally spatially inhomogeneous, electric and/or magnetic induction fields, will be discussed. It will be shown how the subject presents a challenge for theory, computation and experiment.
https://doi.org/10.1142/9789812704658_0118
One of the major problems that clinical anesthesiologists face in their everyday practice is intraoperative patient awareness. The sedation level that prevents these incidents should be guaranteed by proper drug administration and dosing procedures. This paper presents the results of a feasibility study of a closed-loop controller for target-controlled infusion of intravenous anaesthesia, based upon different in-silico experiments.
https://doi.org/10.1142/9789812704658_0119
This article presents an efficient field discretization method based on fundamental solutions for elliptic boundary value problems, represented in this monograph by the steady-state heat conduction equation. The results have shown that the method is highly accurate and does not require the fine grid resolution that other techniques demand.
https://doi.org/10.1142/9789812704658_0120
No abstract received.
https://doi.org/10.1142/9789812704658_0121
No abstract received.
https://doi.org/10.1142/9789812704658_0122
This paper deals with the use of Artificial Intelligence (AI) methods in the process of designing new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, so the development of computer-assisted approaches towards the automation of molecular design is of considerable importance.
https://doi.org/10.1142/9789812704658_0123
An explicit hybrid symmetric six-step method of algebraic order six is presented in this paper. The method has phase-lag of order ten. Numerical comparative results from the application of the new method to well known periodic orbital problems, clearly demonstrate the superior efficiency of the method presented in this paper compared with methods of the same characteristics.
https://doi.org/10.1142/9789812704658_0124
In a previous paper it was shown that the surface impedance concept, already widely used, can contribute to the construction of a universal model for eddy current modeling in thin layers. This is the most fundamental part of a systematic approach to constructing an appropriate type of finite element analysis for weakly coupled magneto-thermo-mechanical phenomena in shell structures. It is worth recalling that this technique was initially used for thin layer modeling. Such a structure can be represented by two surfaces with self-impedances Zaa and Zbb respectively, accompanied by a transfer impedance Ztr taking into account the interdependence of the phenomena on the two surfaces. Hence, the previous paper was devoted to the presentation of this universal model, which is accompanied by a new type of elements for its implementation. A second and a third part followed. The second part dealt with one-directional coupling of magnetic and thermal phenomena, whilst the third one dealt, very briefly, with one-directional coupling of magnetic and mechanical phenomena; this is the point from which the present paper starts.
https://doi.org/10.1142/9789812704658_0125
This work presents several guidelines showing that it is possible to choose the parameters a, b and m of the linear congruential generator xn+1 = (a·xn + b) mod m so as to maximize the period of the generated sequence of pseudo-random numbers, bringing the period close to its theoretical upper bound of m.
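The recurrence above can be sketched in a few lines; the sample parameters a = 5, b = 3, m = 16 below are an illustrative choice (not taken from the paper) that satisfies the Hull–Dobell conditions (gcd(b, m) = 1; a − 1 divisible by every prime factor of m, and by 4 when m is), which guarantee the maximal period m:

```python
def lcg_step(a, b, m, x):
    """One step of the linear congruential recurrence x -> (a*x + b) mod m."""
    return (a * x + b) % m

def lcg_period(a, b, m, seed=0):
    """Iterate until the seed state recurs; for a full-period LCG this is m."""
    x = lcg_step(a, b, m, seed)
    n = 1
    while x != seed:
        x = lcg_step(a, b, m, x)
        n += 1
    return n

# For a=5, b=3, m=16 the Hull-Dobell conditions hold, so the period is 16.
```

Parameters violating these conditions (for example an even b with even m) fall into shorter cycles, which is why the guidelines in the paper matter in practice.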
https://doi.org/10.1142/9789812704658_0126
A near force-balanced system has been developed to reduce the deterioration of part quality occurring during the machining process, by employing multiple active tools cutting simultaneously in different directions and/or active and passive tools (supports) operating simultaneously. FEM is used to demonstrate the feasibility of this idea, considering an aluminium hollow cylinder fixed at one end to represent the turning of a workpiece clamped in a chuck. Weakening the material by reducing its Young's modulus is used as the material-removal technique in this paper. A parametric study is also performed to analyze the effect of varying the relevant parameters, while experimental validation of the concept on a twin-turning CNC machine is underway.
https://doi.org/10.1142/9789812704658_0127
A path leading from basic structural features in chemistry to chemical numbers and their arithmetic is presented. One goal of this work was to investigate the conditions under which self-assembly symmetries form and collapse. It turns out that optimization against certain quality criteria determines specific symmetry datum levels, departure from which triggers the formation of chemically essential symmetry designs. The process can be quantified by fuzzy-set-based characteristics, called overlapping and splitting, that additionally detect collapses (catastrophes) of symmetries and chemical numbers to their most elementary forms. Chemical computer software has been enriched owing to novel relationships between the structures underlying chemical processing and the corresponding arithmetic of chemical numbers. In addition, certain mathematical groups have been specified for molecules of non-complex compounds.
https://doi.org/10.1142/9789812704658_0128
No abstract received.
https://doi.org/10.1142/9789812704658_0129
No abstract received.
https://doi.org/10.1142/9789812704658_0130
In this work, we present two Finite Element formulations of the Level Set method: the Streamline-Upwind/Petrov-Galerkin (SUPG) and the Runge-Kutta Discontinuous Galerkin (RKDG) schemes. Both schemes are constructed so as to minimize the numerical diffusion inherently present in the discretized Level Set equations. In developing the schemes, special attention is given to the issues of mass conservation and robustness. The RKDG Level Set formulation is original and represents the first attempt to apply the discontinuous Galerkin FE method to interface tracking. The performance of the two formulations is demonstrated on selected two-dimensional problems: the broken-dam benchmark problem and a mold-filling simulation. The problems are solved using unstructured triangulated meshes. We also provide a comparison of our results with those obtained using the Volume-of-Fluid (VOF) method.
https://doi.org/10.1142/9789812704658_0131
This paper is an attempt to provide an XML-based framework for modelling and representing multi-source electronic patient records (EPR) as part of an integrated single-source environment. The main focus of this framework is to capture the dynamic features of EPR data sources for further analysis and knowledge discovery with the aid of OLAP and data mining.
https://doi.org/10.1142/9789812704658_0132
No abstract received.
https://doi.org/10.1142/9789812704658_0133
In this study a comparative evaluation of Support Vector Machines (SVMs) and Probabilistic Neural Networks (PNNs) was performed, exploring their ability to classify superficial bladder carcinomas as low- or high-risk. The two classification models achieved relatively high overall accuracies of 85.3% and 83.7%, respectively. Descriptors of nuclear size and chromatin cluster patterns participated in both of the best feature vectors that optimized the classification performance of the two classifiers. In conclusion, the good performance and consistency of the SVM and PNN models reinforces the belief that certain nuclear features carry significant diagnostic information, and renders these techniques viable alternatives in the diagnostic process of grading urinary bladder carcinomas.
https://doi.org/10.1142/9789812704658_0134
The refinement of tetrahedral meshes is a significant task in many numerical and discretization methods. The computational aspects of implementing refinement of meshes with complex geometry need to be carefully considered in order to obtain real-time and optimal results. In this paper we enumerate the relevant computational aspects of tetrahedral refinement algorithms. For local and adaptive refinement we give numerical results on the computational propagation cost of a general class of longest-edge-based refinement, and show the implications of complex geometry for the global process.
https://doi.org/10.1142/9789812704658_0135
No abstract received.
https://doi.org/10.1142/9789812704658_0136
No abstract received.
https://doi.org/10.1142/9789812704658_0137
The purpose of this work was to process carotid plaque ultrasound images, employing pattern recognition methods, in order to assess the embolization risk associated with carotid plaque composition. Carotid plaques in 56 ultrasound images displaying carotid artery stenosis were categorized by means of the gray scale median (GSM) as high-risk of causing brain infarcts (GSM ≤ 50 gray levels) or low-risk (GSM > 50 gray levels), in accordance with the physician's assessment and the final clinical outcome. In each plaque image, the ratio of echo-dense to echo-lucent area was automatically calculated and combined with other textural features computed from the image histogram, the co-occurrence matrix, and the run-length matrix. These features were employed as input to two classifiers, the quadratic Bayesian (QB, accuracy 91.7%) and the support vector machine (SVM, accuracy 100%), which were trained to characterize plaques as either high or low risk of causing brain infarcts.
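The GSM labelling rule stated above reduces to a median-and-threshold computation; the following is an illustrative reimplementation (not the authors' code), and the pixel intensities in the comment are made-up values, not real ultrasound data:

```python
def gsm(pixels):
    """Gray scale median: the median pixel intensity of a plaque region."""
    ordered = sorted(pixels)
    n = len(ordered)
    mid = n // 2
    # Even-length regions take the midpoint of the two central values.
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

def risk_label(pixels, threshold=50):
    """GSM <= threshold gray levels -> 'high-risk', otherwise 'low-risk'."""
    return "high-risk" if gsm(pixels) <= threshold else "low-risk"

# Made-up example region: median 45 <= 50, so it is labelled high-risk.
# risk_label([10, 40, 45, 60, 70])
```

In the paper this GSM label is only the categorization step; the actual classification combines it with histogram, co-occurrence and run-length features fed to the QB and SVM classifiers.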
https://doi.org/10.1142/9789812704658_0138
No abstract received.
https://doi.org/10.1142/9789812704658_0139
A distimulus chromatic detection system allied with fibre-optic light transmission has been used in the development of a low-cost, accurate pH measurement system. The performance of the chromatic pH measurement system is compared with a number of optical measurement techniques based on intensity modulation. The chromatic modulation technique has been shown to have advantages over intensity modulation, such as greater immunity to fibre bending and the maintenance of calibration when the length of the optical fibres used to address the modulator is extended.
https://doi.org/10.1142/9789812704658_0140
The assessment of sonographic findings of thyroid nodules in medical practice depends on the examiner's experience and subjective evaluation. Thus, a quantitative method could clearly be of value to diagnosis. This study evaluates the malignancy risk of thyroid nodules by means of an automatic image analysis system, designed and implemented in C++, for processing B-mode sonographic images. Accuracies of 95% and 93.3% were achieved by the Multi-layer Perceptron neural network and the Support Vector Machine classifiers respectively. The proposed image analysis system combined with either classifier may be indicative of a thyroid nodule's malignancy risk and may be of value in patient management.
https://doi.org/10.1142/9789812704658_0141
Ultrasound imaging involves signals obtained by coherent summation of echo signals from scatterers in the tissue. This accumulation results in a speckle pattern, which constitutes unwanted noise and causes image degradation. Suppression of speckle noise is desirable in order to enhance the quality of ultrasonic images and thereby increase the diagnostic potential of ultrasound examination. In this paper we introduce a denoising technique for medical ultrasound images based on singularity detection through the evolution of the wavelet transform modulus maxima across scales. The algorithm separates the image components from the noise by selecting, via the Lipschitz exponents, the wavelet transform modulus maxima that correspond to image singularities. The performance of the proposed algorithm was tested on 145 B-scan thyroid images (pathological and normal) and demonstrated effective suppression of speckle while preserving resolvable details (edges and boundaries).
https://doi.org/10.1142/9789812704658_0142
In this paper new symplectic Runge–Kutta methods with minimal dispersion and dissipation errors are developed. The proposed schemes are more efficient than the classical Runge–Kutta schemes for computational acoustics problems.
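The abstract does not give the schemes themselves, but the long-time behaviour that motivates symplectic integration can be illustrated with the simplest such integrator, symplectic Euler (not the Runge–Kutta methods of the paper): on a harmonic oscillator its energy error stays bounded over very long integrations instead of drifting, as it would for non-symplectic explicit schemes:

```python
def symplectic_euler(q, p, dt, steps, omega=1.0):
    """Integrate q'' = -omega^2 q with the kick-drift symplectic Euler map."""
    for _ in range(steps):
        p -= dt * omega * omega * q  # kick: momentum update from the force
        q += dt * p                  # drift: position update with the new momentum
    return q, p

def energy(q, p, omega=1.0):
    """Hamiltonian H = p^2/2 + omega^2 q^2/2 of the oscillator."""
    return 0.5 * p * p + 0.5 * omega * omega * q * q

# Starting from (q, p) = (1, 0) with H = 0.5, the energy remains close to
# 0.5 even after 100,000 steps of size dt = 0.01 (t = 1000 periods of order 1).
```

Minimizing dispersion and dissipation, as the paper's methods do on top of symplecticity, additionally keeps wave speeds and amplitudes accurate, which is what matters for computational acoustics.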
https://doi.org/10.1142/9789812704658_0143
No abstract received.
https://doi.org/10.1142/9789812704658_0144
No abstract received.
https://doi.org/10.1142/9789812704658_0145
We present Molecular Dynamics simulation results, based on a semi-empirical potential model analogous to the Tight Binding scheme in the second-moment approximation, concerning the behaviour of a Pb overlayer deposited on the Cu(111) surface as a function of concentration. We found that the adlayer's character changes from fluid to solid as the Pb concentration passes a characteristic value θc = 37.5%. Specifically, for concentrations below θc the deposited Pb atoms exhibit the behaviour of a dilute fluid, while above θc a 2D liquid-like character appears, recovering typical solid behaviour above the saturation concentration dictated by the lattice mismatch, θs = 56.3%. These conclusions are deduced from the calculated structural and diffusive properties of the overlayer, namely the Pb lattice parameter and its relaxed positions relative to the bulk lattice spacing, as well as the atomic diffusion coefficient. It is found that for concentrations up to θc the adlayer exhibits significant expansion, with Pb atoms flowing over the substrate and diffusing very fast, while at θs the Pb adlayer is compressed by as much as 2.65% with respect to the lattice spacing of bulk Pb, in agreement with experimental findings.
https://doi.org/10.1142/9789812704658_0146
No abstract received.
https://doi.org/10.1142/9789812704658_0147
A robust method for calculating the molecular weight and long-chain branching distributions in free-radical polymerization is proposed in this work. The method is based on direct integration of the large system of nonlinear integro-differential equations describing the conservation of "dead" polymer and "live" radicals in the reactor. A fairly general kinetic mechanism was employed to describe the complex kinetics of homo- and co-polymerization in the presence of branching reactions such as transfer to polymer and terminal double-bond polymerization. To simplify the calculations and reduce the order of the prohibitively large nonlinear system, the long-chain hypothesis, together with the quasi-steady-state and continuous-variable approximations, was applied to the "live" radical mass balances. The method was employed to calculate the molecular weight and long-chain branching distributions of poly(p-methyl styrene) and poly(vinyl acetate) produced by bulk homopolymerization in a batch reactor. The assumptions are justified by comparing the calculated distributions with experimental data and with solutions obtained without invoking any assumption. The number- and weight-average molecular weights of the calculated molecular weight distributions are in excellent agreement with experimental data. The methodology is extended to branched copolymers to obtain the total weight molecular weight distribution, by assuming that the respective copolymer composition distribution is uniform. In all cases, the number- and weight-average molecular weights of the calculated distributions are in excellent agreement with those obtained independently by the method of moments. Results are presented showing the effect of branching reactions on the molecular weight distributions of copolymers. It is believed that the present method can be applied to other free-radical polymerization systems to calculate the molecular weight and long-chain branching distributions, thus leading to a more rational design of polymerization reactors.
https://doi.org/10.1142/9789812704658_0148
No abstract received.
https://doi.org/10.1142/9789812704658_0149
No abstract received.
https://doi.org/10.1142/9789812704658_0150
Recently, the use of smell in clinical diagnosis has been rediscovered, owing to major advances in odour sensing technology and artificial intelligence. Urinary tract infections (UTIs) are a serious health problem, producing significant morbidity in a vast number of people each year. A newly developed "artificial nose" based on chemoresistive sensors has been employed to identify in vivo urine samples from 45 patients with suspected uncomplicated UTI who were scheduled for microbiological analysis in a UK health laboratory environment. An intelligent model has been developed, consisting of an odour generation mechanism, a rapid volatile delivery and recovery system, and a classifier system based on artificial intelligence techniques. The implementation of an advanced hybrid neuro-fuzzy scheme and the concept of fusion of multiple classifiers dedicated to specific feature parameters have also been adopted in this study. The experimental results confirm the validity of the presented methods.
https://doi.org/10.1142/9789812704658_0151
Understanding the folding mechanisms by which proteins find their native, functional forms is a crucial issue in the post-genomic era. Molecular dynamics and Monte Carlo simulations are often used to simulate the folding process. Here we show that the activation-relaxation technique allows the discovery of new folding mechanisms for simple protein models.
https://doi.org/10.1142/9789812704658_0152
No abstract received.
https://doi.org/10.1142/9789812704658_0153
No abstract received.
https://doi.org/10.1142/9789812704658_0154
The use of Bayesian methods in medical biology and modeling is an approach that seeks to provide a unified framework for many different imaging processes. In this work, Bayesian models are presented that illustrate biological phenomena using the Gibbs sampler technique. Finally, methods for the estimation of model parameters are proposed, based on likelihood ratio tests.
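The Gibbs sampler mentioned above works by alternately drawing each variable from its full conditional distribution given the others. A minimal sketch for a made-up bivariate normal target with correlation rho (an illustrative example, not the biological models of the paper) looks like this:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, rng=random):
    """Gibbs sampling from a standard bivariate normal with correlation rho."""
    sd = (1.0 - rho * rho) ** 0.5   # conditional std dev of x|y and of y|x
    x = y = 0.0
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)  # draw x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # draw y | x ~ N(rho*x, 1 - rho^2)
        if i >= burn_in:            # discard the burn-in phase
            samples.append((x, y))
    return samples
```

The empirical correlation of the retained samples converges to rho; in the imaging models of the paper the same alternating-conditional scheme is applied to the model parameters rather than to a toy Gaussian pair.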