This book covers a range of topics in optimal design and operations, with particular emphasis on chemical engineering applications. A wide range of optimization methods is considered: deterministic, stochastic, global and hybrid.
Containing papers presented at a bilateral workshop of British and Lithuanian scientists, the book brings together contributions from different fields: chemical engineering, including reaction and separation processes, and food and biological production, as well as business cycle optimization, bankruptcy prediction, protein analysis and bioinformatics.
Sample Chapter(s)
Chapter 1: Hybrid Methods for Optimisation (520 KB)
https://doi.org/10.1142/9789812772954_fmatter
Preface.
Contents.
https://doi.org/10.1142/9789812772954_0001
Computer aided design tools for industrial engineering typically require the use of optimisation. Optimisation problems in industrial engineering are often difficult because nonlinear and nonconvex models are combined with underlying combinatorial features. As a result, no single optimisation procedure is suitable for most design tasks. Hybrid procedures can exploit the best features of each constituent method while mitigating its disadvantages. This paper presents an overview of hybrid methods in engineering design. A simple case study is used to illustrate one hybrid optimisation procedure.
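As a loose illustration of the hybrid idea (a sketch under assumed choices, not the chapter's actual procedure), the following pairs a stochastic global phase with a deterministic local polish on a standard nonconvex test function:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    # Rastrigin function: nonconvex, many local minima, global minimum at 0.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2

# Global phase: a stochastic population search locates a promising basin.
coarse = differential_evolution(objective, bounds, maxiter=50, seed=1)

# Local phase: a deterministic derivative-free polish sharpens the result.
fine = minimize(objective, coarse.x, method="Nelder-Mead")
print(fine.x, fine.fun)
```

The division of labour, global search to escape local minima and local search for fast final convergence, is the typical motivation for hybridisation.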
https://doi.org/10.1142/9789812772954_0002
This paper presents a multi-class data classification approach based on hyper-boxes using a mixed integer linear programming (MILP) model. In contrast to other discriminant classifiers, hyper-boxes are adopted to capture the disjoint regions and define the boundaries of each class so as to minimise the total number of misclassified samples. Non-overlapping constraints are specified to prevent boxes belonging to different classes from overlapping. In order to improve the training and testing accuracy, an iterative solution approach is presented to assign multiple boxes to a single class. Finally, the applicability of the proposed approach is demonstrated through two illustrative examples from machine learning databases. According to the computational results, our approach is competitive in terms of prediction accuracy when compared with various standard classifiers.
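For intuition, here is a minimal sketch of the testing step of a hyper-box classifier: a point takes the class of the enclosing or nearest box. The box bounds, which in the paper come from solving the MILP, are hard-coded assumptions here:

```python
import numpy as np

def box_distance(x, lo, hi):
    # Zero if x lies inside the box [lo, hi]; otherwise the Euclidean
    # distance from x to the box surface.
    return np.linalg.norm(np.maximum(lo - x, 0.0) + np.maximum(x - hi, 0.0))

def classify(x, boxes):
    # boxes: (class_label, lower_bounds, upper_bounds) triples.
    return min(boxes, key=lambda b: box_distance(x, b[1], b[2]))[0]

# Two toy boxes standing in for an MILP solution.
boxes = [("A", np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         ("B", np.array([2.0, 2.0]), np.array([3.0, 3.0]))]
print(classify(np.array([0.5, 0.4]), boxes))  # inside box A -> "A"
print(classify(np.array([2.6, 1.9]), boxes))  # nearest to box B -> "B"
```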
https://doi.org/10.1142/9789812772954_0003
In this paper a template for the implementation of parallel branch and bound algorithms is considered. The standard parts of branch and bound algorithms are implemented in the template, and only the method-specific rules need to be implemented by the user. The sequential version of the template allows easy testing of different variants of an algorithm. Using the parallel version of the template, the user can obtain parallel programs without actually doing any parallel programming. Several parallel global and combinatorial optimization algorithms have been implemented using the template, and the results are presented.
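The shape of such a template can be sketched as follows; the interface (bounding, evaluation and branching hooks supplied by the user) is an assumption for illustration, not the authors' actual API:

```python
import heapq, itertools

def branch_and_bound(root, lower_bound, evaluate, branch):
    """Generic best-first branch and bound skeleton: the standard parts
    (queue, incumbent, pruning) are fixed; the user supplies the
    method-specific rules lower_bound, evaluate and branch."""
    best_value, best_solution = float("inf"), None
    counter = itertools.count()                 # tie-breaker for the heap
    heap = [(lower_bound(root), next(counter), root)]
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_value:
            continue                            # prune: cannot improve incumbent
        value, solution = evaluate(node)        # feasible value for this node
        if value < best_value:
            best_value, best_solution = value, solution
        for child in branch(node):              # method-specific subdivision
            b = lower_bound(child)
            if b < best_value:
                heapq.heappush(heap, (b, next(counter), child))
    return best_value, best_solution
```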
https://doi.org/10.1142/9789812772954_0004
In this paper, we consider problems related to the implementation of Stochastic Approximation (SA) in technical design, namely, estimation of the stochastic gradient, improvement of convergence, stopping criteria for the algorithm, etc. The accuracy of the solution and the termination of the algorithm are treated in a statistical way. We develop a method for estimating a confidence interval for the extremum of the objective function, and for stopping the algorithm, based on order statistics of the objective function values obtained during optimization. Some examples illustrating the application of the developed SA approach to optimal engineering design problems are also given.
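A generic sketch of such an SA loop (Kiefer-Wolfowitz-style finite differences with a crude order-statistics stop; the test function and all constants are assumptions, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Observed objective = true quadratic + simulation noise.
    return np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()

def sa_minimize(x, iters=2000, a=0.5, c=0.1):
    """SA with a central finite-difference stochastic gradient estimate
    and classical decaying step sizes (a generic sketch only)."""
    history = []
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.25
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = ck
            g[i] = (noisy_f(x + e) - noisy_f(x - e)) / (2 * ck)
        x = x - ak * g
        history.append(noisy_f(x))
        # Heuristic statistical stop: the spread between extreme order
        # statistics of recent objective values falls below a tolerance.
        recent = sorted(history[-50:])
        if k > 200 and recent[-1] - recent[0] < 0.05:
            break
    return x

print(sa_minimize(np.zeros(2)))
```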
https://doi.org/10.1142/9789812772954_0005
In this paper a method based on a finite sequence of Monte-Carlo sampling estimators is developed to solve stochastic linear problems. The method is grounded in adaptive regulation of the Monte-Carlo sample size and a statistical termination procedure that takes the statistical modeling error into consideration. Our approach is distinguished by its statistical treatment of the accuracy of the solution, testing the hypothesis of optimality according to statistical criteria, and estimating confidence intervals of the objective and constraint functions. To avoid "jamming" or "zigzagging" while solving the problem, we implement the ε–feasible direction approach. Adjusting the sample size so that it is inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient guarantees convergence a.s. at a linear rate. The numerical study and practical examples corroborate the theoretical conclusions and show that the procedures developed make it possible to solve stochastic problems to an acceptable accuracy with an acceptable amount of computation.
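The sample-size rule can be sketched on a toy smooth stochastic program (the paper treats constrained stochastic linear problems with the ε-feasible direction method, which this sketch omits; the surrogate problem and constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_gradient(x, n):
    # Monte-Carlo estimate of the gradient of E[f(x, xi)]; here f is a
    # quadratic with a random linear term, standing in for the real problem.
    xi = rng.normal(size=(n, x.size))
    grads = 2 * (x - 1.0) + xi          # per-sample gradients
    return grads.mean(axis=0)

x, n, C = np.zeros(2), 100, 10.0
for t in range(100):
    g = sample_gradient(x, n)
    x = x - 0.1 * g
    # Sample-size rule from the abstract: n inversely proportional to the
    # squared norm of the Monte-Carlo gradient estimate (bounded for safety).
    n = int(min(max(C / (g @ g + 1e-12), 100), 100000))
print(x, n)  # n grows as the gradient estimate approaches zero
```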
https://doi.org/10.1142/9789812772954_0006
Gradient-type algorithms can be linked to algorithms for constructing optimal experimental designs for linear regression models. The asymptotic rate of convergence of these algorithms can be expressed through the asymptotic behaviour of an experimental design construction procedure. One well known gradient-type algorithm is the method of Steepest Descent. Here a generalised version of the Steepest Descent algorithm, with a relaxation coefficient, is considered and the rate of convergence of this algorithm is investigated.
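For a convex quadratic, the generalised iteration can be written down directly; the matrix and the value of the relaxation coefficient below are illustrative assumptions:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 2.0]])   # SPD matrix; f(x) = 0.5 * x' A x
grad = lambda x: A @ x

def relaxed_steepest_descent(x, gamma=0.9, iters=50):
    # x_{k+1} = x_k - gamma * alpha_k * g_k, where alpha_k is the exact
    # line-search step for the quadratic and gamma is the relaxation
    # coefficient (gamma = 1 recovers classical Steepest Descent).
    for _ in range(iters):
        g = grad(x)
        alpha = (g @ g) / (g @ (A @ g))
        x = x - gamma * alpha * g
    return x

print(relaxed_steepest_descent(np.array([1.0, 1.0])))  # tends to the origin
```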
https://doi.org/10.1142/9789812772954_0007
We report on an innovative approach to solving Hybrid Flow Shop (HFS) scheduling problems through the combination of existing methods, most of which are simple heuristics. By judiciously combining these heuristics within an evolutionary framework, a higher level heuristic, a Hyper-Scheduler (HS), was devised. It was then tested on a large array of HFS instances differing not only in input data but, crucially, in the objective function used. The results, which are reported here, suggest that the success of the HS may well be due to its ability to exploit potential synergies between the simple heuristics.
https://doi.org/10.1142/9789812772954_0008
This paper considers, for the first time, the simultaneous optimisation of the configuration, design and operation of hybrid batch distillation/pervaporation processes by considering all possible process structures. The overall problem is formulated as a mixed integer dynamic optimisation (MIDO) problem. The optimisation strategy is built around an overall economic index that encompasses capital investment, operating costs and production revenues. Furthermore, rigorous dynamic models developed from first principles for distillation and pervaporation are used. A case study for the separation of a homogeneous tangent-pinch (acetone-water) mixture is presented. It is found that the fully integrated hybrid configuration is economically favourable compared with a conventional distillation process; however, this configuration may be more complex to operate and control.
https://doi.org/10.1142/9789812772954_0009
In the modeling of market research data the so-called Gamma-Poisson model is very popular. The model fits the number of purchases of an individual product made by a random consumer. The model presumes that the number of purchases made by random households, in any time interval, follows the negative binomial distribution. The fitting of the Gamma-Poisson model requires the estimation of the mean m and shape parameter k of the negative binomial distribution. Little is known about the optimal estimation of parameters of the Gamma-Poisson model. The primary aim of this paper is to investigate the efficient estimation of these parameters.
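One standard (though not necessarily efficient) estimator is the method of moments; the sketch below simulates Gamma-Poisson purchase counts and recovers m and k from the sample mean and variance. The paper's point is precisely that estimators differ in efficiency, which this sketch does not address:

```python
import numpy as np

def nbd_moment_estimates(counts):
    """Method-of-moments estimates of the NBD mean m and shape k,
    using Var = m + m^2 / k."""
    m = counts.mean()
    v = counts.var(ddof=1)
    if v <= m:
        raise ValueError("sample shows no overdispersion; k is undefined")
    return m, m * m / (v - m)

rng = np.random.default_rng(2)
true_m, true_k = 2.0, 0.7
# Gamma-Poisson mixture: lambda ~ Gamma(k, scale=m/k), counts ~ Poisson(lambda),
# which marginally gives the negative binomial distribution.
lam = rng.gamma(true_k, true_m / true_k, size=5000)
counts = rng.poisson(lam)
print(nbd_moment_estimates(counts))  # close to (2.0, 0.7)
```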
https://doi.org/10.1142/9789812772954_0010
This paper is concerned with the search for sequences of DNA bases via the solution of the key equivalence problem. The approach is related to the hardening of soft databases method due to Cohen et al.1 Here, the problem is described in graph theoretic terms. An appropriate optimization model is formulated and solved indirectly. This approach is shown to be effective. Computational results on bioinformatics databases are included.
https://doi.org/10.1142/9789812772954_0011
Emulsion polymerisation is widely used in industry to produce a large range of products. Many important properties of the polymer products are strongly influenced by the particle size distribution (PSD) of the latex. The PSD is driven by three major phenomena: particle nucleation, growth and coagulation, which interact strongly with one another, exhibit irreversible behaviour and have widely different time constants, thereby making PSD control challenging. In this study a population balance model is used to develop a feed policy to attain a target PSD. A multi-objective optimisation strategy that targets the individual phenomena of nucleation, growth and coagulation is adopted.
https://doi.org/10.1142/9789812772954_0012
Data parallel algorithms are very common in both optimisation and process modelling problems. The size of the problems solved by such algorithms may be significantly increased, and the absolute computation time reduced, by using parallel computing. Parallel Arrays is a C++ library designed to simplify the parallelisation of data parallel algorithms, using a principle similar to that of High Performance Fortran: the algorithm must be implemented using special arrays instead of native C/C++ ones. Application of the library to the parallelisation of a porous media modelling algorithm and to image smoothing is also described.
https://doi.org/10.1142/9789812772954_0013
Shape grammars are types of non-linear formal grammars that have been used in a range of design domains such as architecture, industrial product design and PCB design. Graph grammars contain production rules with similar generational properties, but operating on graphs. This paper introduces CAD grammars, which combine qualities from shape and graph grammars, and presents new extensions to the theories that enhance their application in design, modelling and manufacturing. Details about the integration of CAD grammars into automated spatial design systems and standard CAD software are described. The benefits of this approach over traditional shape grammar systems are also demonstrated.
https://doi.org/10.1142/9789812772954_0014
Multidimensional scaling is a technique for the visualization of multidimensional data. A difficult global optimization problem must be solved to minimize the error of visualization. A parallel genetic global optimization algorithm for multidimensional scaling is implemented to enable the solution of large scale problems in acceptable time. Results of visualization using a high performance computer and a cluster of personal computers are presented.
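The objective being minimised is the stress of the embedding; a minimal serial version of it (raw stress, one common variant among several) looks like this:

```python
import numpy as np

def stress(flat_y, D, dim=2):
    """Raw stress of an embedding Y against the original distance matrix D;
    this is the nonconvex objective a (parallel) genetic algorithm would
    minimise (a sketch of the objective only)."""
    Y = flat_y.reshape(-1, dim)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    iu = np.triu_indices(len(Y), k=1)      # each point pair counted once
    return np.sum((D[iu] - d[iu]) ** 2)

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))               # original multidimensional points
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(stress(rng.normal(size=40), D))      # stress of a random 2-D embedding
```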
https://doi.org/10.1142/9789812772954_0015
Multidimensional scaling is a technique for the visualization of multidimensional data. In this paper pharmacological data are visualized using multidimensional scaling for the visual analysis of the properties of adrenoceptors (which form a class of G-protein-coupled receptors) and their ligands. The aim of the visualization is to provide useful information for the prediction of structural features of proteins and for drug design. Implementing a multidimensional scaling technique requires the solution of a difficult global optimization problem. To attack this problem a hybrid global optimization method is used, in which an evolutionary global search is combined with a local descent.
https://doi.org/10.1142/9789812772954_0016
Multidimensional scaling (MDS) is a promising technique for the visualization and exploratory analysis of multidimensional data. By means of MDS algorithms a two dimensional representation of a set of points in a high dimensional (original) space can be obtained, where distances between the points in the two dimensional embedding space represent the dissimilarity of the multidimensional points. The latter is normally measured by the Euclidean distance, although alternative measures can be advantageous. In the present paper we investigate the influence of the choice of dissimilarity measure (the distances in the original space) on the visualization results.
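As a small illustration of such alternatives, the Minkowski family below generalises the Euclidean distance (p = 2); the example data are made up:

```python
import numpy as np

def minkowski(X, p):
    # Pairwise Minkowski distances: p=1 is city-block, p=2 Euclidean,
    # and large p approaches Chebyshev - the kind of alternative
    # dissimilarity measures whose effect on the MDS image is studied.
    diff = np.abs(X[:, None, :] - X[None, :, :])
    return (diff ** p).sum(axis=-1) ** (1.0 / p)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
for p in (1, 2, 4):
    print(p, np.round(minkowski(X, p), 3))
```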
https://doi.org/10.1142/9789812772954_0017
A method for correcting interpoint distances before projecting onto a lower dimensional space is presented in this paper. The basic idea is to change the distances according to the distribution of distances in high dimensional spaces. Such corrections increase the quality of the mapping: they make the clusters of data points more distinct, so the data structure is represented more precisely after projection. The proposed corrections are simple; the values of the correction coefficients were calculated for various data dimensionalities.
https://doi.org/10.1142/9789812772954_0018
Statistical and artificial intelligence methods have been used in financial institutions to determine credit risk classes. Recently, artificial neural network algorithms have often been applied; one of them is the self-organizing map (SOM), a two-dimensional map in which credit units are grouped by similar characteristics (attributes), generated without reference to network outputs. If credit units of one class dominate a cluster, it is reasonable to use SOMs to forecast company bankruptcy. Here the core statistical methodology for predicting the bankruptcy of corporate companies created by Altman, the so-called Z-score model, was used. Bankruptcy factors served as input data, and Z-score values were used to define the clusters of the generated SOM. The results of our investigation are presented; they show that the SOM is a reliable method for bankruptcy prediction.
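For reference, Altman's original (1968) five-ratio Z-score is sketched below; the threshold zones in the comment are the conventional ones, and the sample ratios are made up:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, s_ta):
    """Altman's original Z-score for public manufacturing firms.
    Arguments are the five classic ratios, e.g. wc_ta = working capital /
    total assets, mve_tl = market value of equity / total liabilities.
    The paper feeds such factors to a SOM and labels clusters with Z."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * s_ta)

z = altman_z(0.15, 0.20, 0.10, 1.1, 1.3)
# Conventional zones: z > 2.99 "safe", 1.81..2.99 "grey", < 1.81 "distress".
print(round(z, 2))
```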
https://doi.org/10.1142/9789812772954_0019
This paper reviews four methods of estimating the output gap in Lithuania: the Hodrick-Prescott (HP) filter, the Prior-Consistent (PC) filter, a production function model and a multivariate unobserved components model. Latent variables are obtained using the Kalman filter. All estimates of the output gap show that the economy of Lithuania was above its potential level at the end of 2004. The Kalman filter output gap was less volatile than those from the other methods, and only from the smooth Kalman filter results was it possible to identify the business cycle of the Lithuanian economy. The long-run potential growth of the Lithuanian economy is estimated at 5.75 per cent. We could not show that the Kalman filter reduces end-of-sample uncertainty. The Kalman filter consistently underestimated the output gap, while the HP filter tended to overestimate it.
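As a sketch of the simplest of the four methods, the HP decomposition of a synthetic quarterly log-GDP series (not Lithuanian data) using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
# Synthetic log-GDP: deterministic trend plus cycle plus noise.
t = np.arange(60)
log_gdp = 0.014 * t + 0.02 * np.sin(t / 4) + 0.005 * rng.normal(size=60)

# HP decomposition; lambda = 1600 is the standard value for quarterly data.
cycle, trend = sm.tsa.filters.hpfilter(log_gdp, lamb=1600)
output_gap = 100 * cycle          # per cent deviation from potential output
print(np.round(output_gap[-4:], 2))
```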
https://doi.org/10.1142/9789812772954_0020
Knowledge of the impact of thermal processing in the food industry is crucial in order to deliver high quality, safe foods to the consumer. Time Temperature Integrators (TTIs) have been developed as quality control and process exploration tools for processes where the use of other thermal sensors is impossible. TTIs are encapsulated enzymatic suspensions with well characterized thermal inactivation kinetics, whose activity can be measured easily before and after processing. From the reduction in TTI activity it is possible to estimate the inactivation of pathogens and spoilage organisms, as well as of nutrients in the product. Although TTIs are currently used in many industries, a thorough review of their applicability to the evaluation of thermal processes has not yet been published. Here, experimental validation of an α-amylase TTI is presented with the intention of accurately characterising the variability of the technique. To describe the thermal variability of real food processes, the heat and mass transport in typical food processes where TTIs might be used was simulated using CFD. Monte Carlo simulations were performed to study the effects of (i) process variability and (ii) the measurement variability inherent in the TTI response. The results indicate that TTIs can be used both to validate thermal processes and as a process exploration tool. In the latter role, they can be used to derive information about variation, although a larger number of TTIs would be required.
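A minimal sketch of the Monte Carlo part, assuming standard first-order (D/z-value) inactivation kinetics; the D- and z-values and the variability levels are illustrative assumptions, not the measured α-amylase kinetics:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_reduction(time_min, T, D_ref=10.0, T_ref=85.0, z=9.0):
    # First-order thermal inactivation: decimal reductions accumulated at
    # temperature T, with assumed D- and z-values for illustration only.
    D = D_ref * 10 ** ((T_ref - T) / z)
    return time_min / D

# Monte Carlo over (i) process variability (temperature scatter) and
# (ii) TTI measurement variability, mirroring the two studied effects.
T = rng.normal(85.0, 1.0, size=10000)            # process temperature, deg C
reductions = log_reduction(5.0, T)
measured = reductions * rng.normal(1.0, 0.05, size=T.size)  # read-out noise
print(np.round([measured.mean(), measured.std()], 3))
```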
https://doi.org/10.1142/9789812772954_0021
A high quality deflection yoke (DY) is one of the most important factors in a high quality monitor. The role of the deflection yoke is to deflect electron beams in the horizontal and vertical directions. If the magnetic field is formed incorrectly, misconvergence of the beams may occur, resulting in a blurred image on the screen of the monitor. The magnetic field of the DY may be corrected by sticking one or several ferroelastic shunts on the inside surface of the deflection yoke. Some secondary balance parameters also need to be controlled. Because of the complexity of the process it is not easy to determine how to stick the shunts; therefore, two optimization methods were used to find the optimal shunt positions. This paper presents the research results and their application in industry.
https://doi.org/10.1142/9789812772954_0022
Extractive fermentation is based on the removal of inhibitory compounds from the culture broth by an extractive agent, allowing process intensification. In this work, a rigorous approach to the description of extractive fermentation for ethanol production was used. To this end, fermentation kinetics models were coupled with models describing liquid-liquid equilibrium in order to simulate the continuous culture. A shortcut method based on the principles of thermodynamic-topological analysis is proposed for studying the behaviour of the process. The feasibility of different sets of operating parameters was examined. Using these tools, a general optimization strategy was formulated.
https://doi.org/10.1142/9789812772954_0023
A model based control approach known as Generic Model Control (GMC) was analysed and proposed for the regulation of the specific growth rate of autotrophic biomass in a wastewater treatment plant (WWTP). In GMC theory, the nonlinear process model is embedded directly in the control law. One of the most attractive features of this control scheme is that it solves an optimization problem in a single calculation step. With the aid of a complex WWTP simulator, the economic efficiency of raising the set point of the autotrophic biomass specific growth rate at night and lowering it during the day (exploiting night and day electrical energy tariffs) was analysed.
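The generic GMC construction can be sketched for a toy first-order process; the model, gains and numbers below are assumptions, not the chapter's WWTP model:

```python
def gmc_control(y, ysp, err_integral, K1=0.5, K2=0.05, a=-0.1, b=1.0):
    """One GMC step for a toy first-order model dy/dt = a*y + b*u.
    GMC asks the process to follow the reference trajectory
    dy/dt = K1*(ysp - y) + K2 * integral(ysp - y) dt, then inverts the
    embedded model for u in a single calculation step."""
    dydt_desired = K1 * (ysp - y) + K2 * err_integral
    return (dydt_desired - a * y) / b     # model inversion for the input u

print(round(gmc_control(y=0.8, ysp=1.0, err_integral=0.4), 4))  # -> 0.2
```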