FLINS, originally an acronym for Fuzzy Logic and Intelligent Technologies in Nuclear Science, is now extended to applied artificial intelligence for applied research. The contributions to the seventh conference in the FLINS series collected in this volume cover state-of-the-art research and development in applied artificial intelligence, for applied research in general and for power/nuclear engineering in particular.
https://doi.org/10.1142/9789812774118_fmatter
FOREWORD.
CONTENTS.
https://doi.org/10.1142/9789812774118_0001
No abstract received.
https://doi.org/10.1142/9789812774118_0002
This presentation addresses the problems of realizing human-friendly man-machine interaction in a service robotics environment, with emphasis on learning capability. After briefly reviewing the issues of human-robot interaction and various learning techniques from an engineering point of view, we report our experiences in case studies where some learning techniques are successfully implemented in a service robotics environment, and discuss open issues of learning systems such as adaptivity and life-long learning capability.
https://doi.org/10.1142/9789812774118_0003
No abstract received.
https://doi.org/10.1142/9789812774118_0004
No abstract received.
https://doi.org/10.1142/9789812774118_0005
No abstract received.
https://doi.org/10.1142/9789812774118_0006
No abstract received.
https://doi.org/10.1142/9789812774118_0007
In this paper we present an automatic evaluation tool for fuzzy first-order logic formulae. Since different logics can be considered, we allow such formulae to contain syntactic modifiers, so that our tool is designed not only to evaluate formulae in existing logics, but also to evaluate properties in any other logic framework supplied by the user. This generalization is implemented in Haskell, a functional programming language.
https://doi.org/10.1142/9789812774118_0008
In this paper, we assume that computing with words is equivalent to computation by field. A field is generated by a word or a sentence acting as a source. Computation by field means the search for the intensities of the sources of fields (holography) and the construction of fields from the intensities of their sources (the field modelling process). A field is a map between points in a reference space and values. For example, in a library, the reference space would be where the documents are located. For a given word, we define the field as a map from the positions of the documents in the library to the number of occurrences (values) of the word in each document. The word or source is located at one point of the reference space (the query), but the field (the answer) can be located in any part of the reference space.
Complex strings of words (structured queries) generate a complex field or complex answer whose structure is obtained by the superposition of the fields of the words as sources with different intensities. Any field is a vector in the space of the documents. A set of basic fields spans a vector space and forms a concept. We break with the traditional idea that a concept is one word in the conceptual map. The internal structure (entanglement) of the concept is the relation of dependence among the basic fields. A geometric image of the concept X and the field R in Zadeh's rule “X isr R” is given. Fields can be fuzzy sets whose values are the membership values of the fuzzy set. An ambiguous word is the source (query) of a fuzzy set (field or answer).
https://doi.org/10.1142/9789812774118_0009
In this paper we introduce a set of tuning operators that allow us to implement context adaptation of fuzzy rule-based systems while preserving semantics and interpretability. The idea is to achieve context adaptation by starting from a (possibly generic) fuzzy system and adjusting one or more of its components, such as membership function shapes, fuzzy set supports, the distribution of membership functions, etc. We make use of a genetic optimization process to appropriately choose the operator parameters. Finally, we show the application of the proposed operators to Mamdani fuzzy systems.
https://doi.org/10.1142/9789812774118_0010
Buckley and Qu proposed a method to solve systems of linear fuzzy equations. Basically, in their method the solutions of all systems of linear crisp equations formed by the α-levels are calculated. We propose a new method for solving systems of linear fuzzy equations, based on a practical algorithm using parametric functions in which the variables are given by the fuzzy coefficients of the system. By observing the monotonicity of the parametric functions in each variable, i.e. each fuzzy coefficient in the system, we improve the algorithm so that fewer parametric functions are calculated and fewer evaluations of these parametric functions are needed. We show that our algorithm is much more efficient than the method of Buckley and Qu.
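As a concrete point of reference for the α-level computations mentioned above, the following sketch (with hypothetical triangular coefficients) brute-forces the solution envelope of a small fuzzy linear system by solving the crisp system at every combination of α-cut endpoints, i.e. the Buckley-Qu style baseline; the paper's improved algorithm avoids most of these evaluations by exploiting the monotonicity of the parametric solution functions.

```python
# Baseline alpha-level (vertex) evaluation of a fuzzy linear system; illustrative only.
import itertools
import numpy as np

def tri_cut(t, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number t = (left, peak, right)."""
    l, m, r = t
    return (l + alpha * (m - l), r - alpha * (r - m))

# Hypothetical 2x2 system A x = b with triangular fuzzy coefficients.
A = [[(1.5, 2.0, 2.5), (0.5, 1.0, 1.5)],
     [(0.8, 1.0, 1.2), (2.5, 3.0, 3.5)]]
b = [(3.5, 4.0, 4.5), (4.5, 5.0, 5.5)]

def solution_bounds(alpha):
    """Solve the crisp system at every endpoint combination and take the envelope."""
    cuts = [tri_cut(t, alpha) for row in A for t in row] + [tri_cut(t, alpha) for t in b]
    corners = []
    for corner in itertools.product(*cuts):
        Ac = np.array(corner[:4]).reshape(2, 2)
        bc = np.array(corner[4:])
        corners.append(np.linalg.solve(Ac, bc))
    corners = np.array(corners)
    return corners.min(axis=0), corners.max(axis=0)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = solution_bounds(alpha)
    print(f"alpha={alpha}: x1 in [{lo[0]:.3f}, {hi[0]:.3f}], x2 in [{lo[1]:.3f}, {hi[1]:.3f}]")
```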
https://doi.org/10.1142/9789812774118_0011
This paper gives an overview of numerical strategies for the implementation of the fuzzy finite element method for structural dynamic analysis. It is shown how the deterministic finite element procedure is translated to a fuzzy counterpart. Some general solution strategies for the underlying interval problem are reviewed. The specific requirements for the implementation of the fuzzy method for structural dynamic analysis are then discussed based on two typical dynamic analysis procedures, i.e., the eigenvalue analysis and the frequency response analysis.
https://doi.org/10.1142/9789812774118_0012
This paper establishes a novel environmental/economic load dispatch model in which the cost and emission functions have uncertain coefficients. To solve the problem, this study first converts the model into a single-objective optimization problem by using a novel weighting ideal point method. A hybrid genetic algorithm with quasi-simplex techniques is then developed to solve the corresponding single-objective optimization problem. A fuzzy number ranking method is also applied to compare the fuzzy function values of different points for the single objective function. The test results confirm the validity of the model and the effectiveness of the algorithm used for solving the economic dispatch problem.
https://doi.org/10.1142/9789812774118_0013
In this paper, we introduce a new family of approaches to fuse inconsistent logic-based knowledge sources. They accommodate two preference criteria to arbitrate between conflicting information: namely, the minimisation of the number of contradicted formulas and the minimisation of the number of the different terms that are involved in those formulas.
https://doi.org/10.1142/9789812774118_0014
Based on an analysis of a knowledge management fuzzy model, the judgment thinking process is discussed in this paper. Then, fuzzy models of three thinking forms based on intelligent information processing are established: a model of abstract thinking, a fuzzy pattern recognition model of imagination thinking, and a fuzzy state equation model of intuitive thinking. Finally, a fuzzy integrated judgment model can be established on the basis of the three thinking forms and their different fuzzy models by fuzzy large-scale system modeling techniques.
https://doi.org/10.1142/9789812774118_0015
This paper proposes a semantic assistant method to help the grammar parser choose the correct parsing structure. In this method, the understandability of the phrases generated in the process of parsing is checked by testing the weak consistency of their semantic knowledge against a background knowledge base learned from a treebank corpus. The parsing probabilities of the understandable phrases are raised to near 1, so that the correct parsing structure eventually obtains the largest probability. The experimental results show that the method clearly improves the accuracy of the grammar parser.
https://doi.org/10.1142/9789812774118_0016
In order to manipulate the linguistic values of the linguistic variable Truth directly, this paper aims, on the one hand, to construct algebraic structures that model the linguistic domain of Truth. On the other hand, because reasoning based on logic systems has reached a high level of confidence, logical algebraic systems (Lukasiewicz product algebras) are constructed. A method of linguistic reasoning that operates directly on Truth values is developed. The examples in this paper show that its conclusions coincide with our intuitive understanding of the linguistic values of Truth.
https://doi.org/10.1142/9789812774118_0017
In this paper, we present mathematical properties of lattice implication algebra L6 with six linguistic terms: true, false, more true, more false, less true, less false. We also discuss the properties of propositional logic L6P(X) and present the structure of generalized literals and the standardization of formulae.
https://doi.org/10.1142/9789812774118_0018
Using Kripke-style semantics, a kind of qualitative fuzzy logic system that can reflect the “elasticity” of fuzzy propositions is proposed. The truth value of a fuzzy proposition is not a singleton but depends on the context in the real world. Given a fuzzy proposition, one chooses an equivalence relation to obtain different classes. Based on an equivalence class and its weight, a qualitative fuzzy proposition can hold. Some properties of this system are also discussed. Considering the weights of the different classes, a method to aggregate the lost information is presented. With the alternation of the possible worlds and their weights, a dynamic resolution method is introduced.
https://doi.org/10.1142/9789812774118_0019
In this paper, we introduce the properties of annihilators and α-subsets. Then we prove that the annihilator of a lattice H implication algebra is a prime LI-ideal. In the end, we define “∧”, “∨”, “α*” and “⇒” on Σ(L), which is the set of all LI-ideals, and prove that (Σ(L), ∨, ∧, α*, ⇒) is a lattice implication algebra.
https://doi.org/10.1142/9789812774118_0020
The notion of a multi-fold fuzzy implicative filter is introduced in residuated lattice implication algebras. The properties of multi-fold fuzzy implicative filters are investigated. The relations between multi-fold fuzzy implicative filters and filters, between multi-fold fuzzy implicative filters and fuzzy filters, and between the lattice implication homomorphism image of a multi-fold fuzzy implicative filter and the multi-fold fuzzy implicative filter are discussed, respectively.
https://doi.org/10.1142/9789812774118_0021
Pseudo-difference posets have been introduced as a new quantum logic structure. In this paper, we show that the axiom (PD6) in the definition of pseudo-difference posets is not independent, and hence obtain some simpler axiom systems. We show that pseudo-difference posets are a noncommutative generalization of D-posets, just as pseudo-effect algebras are of effect algebras. The PD-algebra is introduced as a generalization of both pseudo-difference posets and D-algebras, in which the partial order is not assumed.
https://doi.org/10.1142/9789812774118_0022
This paper presents a rigorous proof to the existence of chaos in the sense of Li-Yorke in a spatiotemporal chaotic system, i.e., a coupled map lattice. An explicit formula for choosing parameters of the coupled map lattice to guarantee its chaos is derived. It therefore lays a theoretical foundation for further study and exploitation of this spatiotemporal chaotic system. Simulation shows the correctness of the theoretical investigation.
https://doi.org/10.1142/9789812774118_0023
One of the important problems of the theory of IF sets is the creation of a probability theory. Recently the family of IF-events was embedded in an MV-algebra, hence many results of the probability theory on MV-algebras can be applied. Of course, the mentioned MV-algebra has some special properties. The aim of the paper is the description of the basic notions of the probability theory on this special MV-algebra.
https://doi.org/10.1142/9789812774118_0024
In this paper, we suggest a new approach to testing the reliability of the interior-outer-set model for calculating fuzzy probabilities. With a sample drawn from a population, we use the model to obtain a possibility distribution on a probability universe with respect to a histogram interval. Then, with N samples drawn from the same population, we obtain N histogram estimates of the probability that an event occurs in the same interval. Because the distribution constructed from the histogram estimates is similar to the possibility distribution, according to the possibility/probability consistency principle we infer that the model is basically reliable.
https://doi.org/10.1142/9789812774118_0025
A novel multi-scale Gaussian process (MGP) model is proposed for regression and prediction. Motivated by the idea of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at a given scale by a linear basis composed of a scale function and its different translations. Finally, the distribution of the targets can be obtained at different scales. Compared with the standard GP model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, so it can trade off generalization ability against empirical risk rapidly. Experiments show that the performance of MGP is significantly better than GP if appropriate scales are chosen.
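The MGP basis construction itself is not reproduced here; the following minimal sketch only shows plain Gaussian-process regression with an RBF kernel on synthetic data, where a single scale (length-scale) parameter plays the complexity-controlling role that the abstract refers to. All names and values are assumptions for illustration.

```python
# Plain GP regression; the kernel length-scale acts as the "scale" knob that trades
# smoothness (simplicity) against fit to the data. Illustrative only.
import numpy as np

def rbf(x1, x2, scale):
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / scale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
noise = 0.1 ** 2

for scale in (0.3, 1.0, 3.0):                    # coarser scale -> smoother posterior mean
    K = rbf(x, x, scale) + noise * np.eye(x.size)
    alpha = np.linalg.solve(K, y)
    mean = rbf(x, x, scale) @ alpha              # posterior mean at the training inputs
    print(f"scale={scale}: training RMSE {np.sqrt(np.mean((mean - y) ** 2)):.3f}")
```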
https://doi.org/10.1142/9789812774118_0026
Since subjective choice can cause the loss of valuable original information, statistical methods are employed to deal with multi-variable problems. After normalization, the original variables are reduced to several independent synthetic variables on which the evaluation is based; Principal Component Analysis (PCA) provides a good example of this. However, calculations on real data show that the result of Principal Component Analysis does not always comply with the real situation, and sometimes it can be totally wrong. Several problems exist with this kind of classification: (1) the discrimination ability of PCA is limited; (2) for samples with many variables, PCA loses its ability to discriminate; (3) when the value of a variable increases, the class level may, on the contrary, decrease; (4) the same samples may receive different classifications; (5) the variables may change a lot while the classification remains unchanged; (6) while the variables change arbitrarily, only two different classifications appear; (7) changing the position of the variables changes the classification; (8) a change in a single variable changes the classification. These problems are caused by the nature of Principal Component Analysis itself.
https://doi.org/10.1142/9789812774118_0027
In order to tolerate partial truth due to imprecise or incomplete data that may often exist in massive databases, or due to the insignificance of tiny tuple differences in a huge volume of data, the notion of functional dependency with degree of satisfaction, denoted (FD)d, has been proposed in [5], along with Armstrong-like properties and the concept of a minimal set. This paper discusses and presents several optimization strategies and inference properties for discovering the minimal set of (FD)d and incorporates them into the corresponding algorithm so as to improve its computational efficiency.
https://doi.org/10.1142/9789812774118_0028
The principle of local structure mapping in analogy reasoning is introduced and applied to the problem of learning from data. A conceptually straightforward approach is presented for the classification problem on real-valued data. An experiment on Iris data set is provided to illustrate that the principle of local structure mapping can be an effective mechanism and viewpoint for tasks of learning from data as well as analogy reasoning.
https://doi.org/10.1142/9789812774118_0029
This paper deals with a kind of weak ratio rule for forecasting upper bounds, namely upper bound weak ratio rules. Upper bound weak ratio rules parallel Jiang's weak ratio rules and carry a reasoning meaning of the following form: if the amount a customer spends on bread is 2, then the amount spent on butter is at most 3. By discussing the mathematical model of the upper bound weak ratio rule problem, we come to the conclusion that upper bound weak ratio rules are also a generalization of Boolean association rules and that every upper bound weak ratio rule is supported by a Boolean association rule. We propose an algorithm for mining an important subset of upper bound weak ratio rules and construct an upper bound weak ratio rule uncertainty reasoning method. Finally, an example is given to show how to apply upper bound weak ratio rules to reconstructing lost data, forecasting and detecting outliers.
https://doi.org/10.1142/9789812774118_0030
In this study, we present a clustering approach that automatically determines the number of clusters before starting the actual clustering process. This is achieved by first running a multi-objective genetic algorithm on a sample of the given dataset to find a set of alternative solutions over a given range of cluster numbers. Then, we apply cluster validity indexes to find the most appropriate number of clusters. Finally, we run CURE to do the actual clustering, feeding the determined number of clusters as input. The reported test results demonstrate the applicability and effectiveness of the proposed approach.
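A minimal sketch of the same two-stage idea follows; since neither the multi-objective genetic algorithm nor CURE is a standard library routine, the sketch substitutes a silhouette scan over k-means partitions of a sample for the first stage and agglomerative clustering for the final stage, purely for illustration.

```python
# Two-stage clustering: determine the number of clusters on a sample, then cluster.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=600, centers=4, random_state=0)            # synthetic data
sample = X[np.random.default_rng(0).choice(len(X), 150, replace=False)]

# Stage 1: validity index over a range of candidate cluster counts (on the sample).
scores = {k: silhouette_score(sample, KMeans(n_clusters=k, n_init=10, random_state=0)
                              .fit_predict(sample)) for k in range(2, 9)}
best_k = max(scores, key=scores.get)

# Stage 2: the actual clustering, fed with the determined number of clusters.
labels = AgglomerativeClustering(n_clusters=best_k).fit_predict(X)
print("chosen k:", best_k, "cluster sizes:", np.bincount(labels))
```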
https://doi.org/10.1142/9789812774118_0031
This paper presents a method for reducing linguistic terms in sensory evaluation using the principle of rough set theory. Using this method, inconsistent and insensitive evaluation terms are removed and then the related sensory evaluation work can be simplified. The effectiveness of this method has been validated through an example in fabric hand evaluation.
https://doi.org/10.1142/9789812774118_0032
The paper provides a survey of methods for extracting logical rules from data, and draws attention to the specifics of rule extraction by means of artificial neural networks. The importance of rule extraction from data for real-world applications is illustrated by a case study with EEG data, in which five rule extraction methods were used, including one ANN-based method.
https://doi.org/10.1142/9789812774118_0033
The paper introduces the concept of continuous-time dynamic neural units with adaptable input and state-variable time delays (TmDNU – Time Delay Neural Units). Two types of TmDNUs are proposed, introducing adaptable time delays either into the neural inputs only or into both the neural inputs and the neural unit state variable. Robust capabilities of TmDNUs for time delay identification and for approximating linear systems with higher-order dynamics are shown for standalone single-input TmDNUs with a linear neural output function (somatic operation). A simple dynamic backpropagation learning algorithm is shown for continuous-time adaptation of the time delay parameters. The units also represent elements for building novel artificial neural network architectures.
https://doi.org/10.1142/9789812774118_0034
In this contribution, a comparative analysis of the evaluation of two soft computing methods, the Multilayer Perceptron (MLP) and the Takagi-Sugeno model (TSM), is described. Above all, the derived characteristics of the linear connection between the input and output values are compared with regard to model quality. In addition, a correlation with the characteristic of the linear relationship between two features of the process data is derived.
https://doi.org/10.1142/9789812774118_0035
As a novel multi-objective optimization technique, multi-objective particle swarm optimization (MOPSO) has gained much attention and a number of applications during the past decade. In order to enhance the performance of MOPSO with respect to the diversity and convergence of the solutions, this paper introduces new methods to update the personal guide and to select the global guide for each swarm member from the particle set and the Pareto front set. In order to validate the proposed method, simulation results and comparisons with several multi-objective evolutionary algorithms and MOPSO-based algorithms that are representative of the state of the art in this area are presented. The article concludes with a discussion of the obtained results as well as ideas for further research.
https://doi.org/10.1142/9789812774118_0036
Managing customers according to their classification is one of the most important business strategies. A new method to classify customers is presented in this paper. Firstly, we try to find the key background information by simplifying the decision table based on rough set theory; secondly, we determine the profit by analyzing the sales to and the cost of customers; and finally, we derive the decision rules on the principle of maximum profit. In this way, we can reason out to which class a new customer belongs and select a good way to serve him, thus achieving the optimal economic benefit for enterprises.
https://doi.org/10.1142/9789812774118_0037
Customer classification is one of the major tasks in customer relationship management. Customers have both static characteristics and dynamic behavioural features, and applying both kinds of data in a comprehensive analysis can enhance the reasonableness of customer classification. In this paper, customer dynamic data are clustered using a hybrid genetic algorithm and then combined with customer static data to give a reasonable customer segmentation using a neural network technique. A novel classification method that considers both the static and the dynamic data of customers is thus proposed. Applying the proposed method to a bank's datasets clearly improves the accuracy of customer classification compared with traditional methods in which only static data are used.
https://doi.org/10.1142/9789812774118_0038
In order to reflect characteristic knowledge and mine data in the credit market, a two-stage classification method is adopted and a fuzzy clustering analysis is presented. First of all, the paper normalizes the attributes of the multiple factors that influence bank credit, computes the fuzzy analogical relation coefficients, sets the threshold level α by considering the competition and the social credit risk state in the credit market, and selects borrowers through a transitive closure algorithm. Second, it makes an initial classification of the samples according to the coefficient characteristics of the fuzzy relation; third, it improves the fuzzy clustering method and its algorithm. Finally, the paper studies a case of credit knowledge mining in the financial market.
https://doi.org/10.1142/9789812774118_0039
In Support Vector Machines (SVMs), a non-linear model is estimated by solving a Quadratic Programming (QP) problem. Based on the work in [1], we investigate the quantification of the structural econometric model parameters of inflation in the Slovak economy. The theory of the classical Phillips curve [7] is used to specify a structural model of inflation. We fit the models based on the econometric approach for inflation over the period 1993-2003 in the Slovak Republic, and use them as a tool to compare their approximation and forecasting abilities with those obtained using the SVM method. Some methodological contributions are made for SVM implementations in causal econometric modelling. The SVM methodology is extended to economic time series forecasting.
https://doi.org/10.1142/9789812774118_0040
In this paper three models derived using Machine Learning techniques (Support Vector Machines, Decision Trees and Shadow Clustering) are compared for approximating the reliability of real complex networks, such as for water supply, electric power or gas distribution systems or telephone systems, using different reliability criteria.
https://doi.org/10.1142/9789812774118_0041
Based on the works [8], [16], a fuzzy time series model is proposed and applied to predict a chaotic financial process. The general methodological framework of classical and fuzzy modelling of economic time series is considered. A complete fuzzy time series modelling approach is proposed. To generate fuzzy rules from data, a neural network with Supervised Competitive Learning (SCL)-based product-space clustering is used.
https://doi.org/10.1142/9789812774118_0042
The theory of fuzzy sets founded by Zadeh in 1965 has been proven useful for dealing with uncertain and vague information. The grey theory first proposed by Deng (1982) avoids the inherent defects of conventional statistical methods and requires only a limited amount of data to estimate the behaviour of unknown systems. In this paper, we use fuzzy set theory and grey theory to develop an efficient method to predict the cash flows of an investment. The cash flows obtained are used in a present worth analysis to determine whether the investment is acceptable. Illustrative examples are given.
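The abstract does not give the fuzzy/grey formulation in detail; the following sketch only shows a standard GM(1,1) grey forecast of a hypothetical cash-flow series followed by a crisp present-worth calculation, as a rough illustration of the kind of computation involved.

```python
# Standard GM(1,1) grey prediction plus a crisp present-worth check; illustrative only.
import numpy as np

def gm11_forecast(x0, horizon=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(z1.size)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing coefficient, grey input
    k = np.arange(x0.size + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=x1_hat[0])[x0.size:]  # back-difference to the original series

cash_flows = [120.0, 132.0, 141.0, 155.0, 163.0]         # hypothetical yearly cash flows
forecast = gm11_forecast(cash_flows)
pw = sum(cf / 1.10 ** (t + 1) for t, cf in enumerate(list(cash_flows) + list(forecast)))
print("forecast:", np.round(forecast, 1), "present worth at 10%:", round(pw, 1))
```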
https://doi.org/10.1142/9789812774118_0043
This paper presents an extended branch-and-bound algorithm for solving fuzzy linear bilevel programming problems. In a fuzzy bilevel programming model, the leader attempts to optimize his/her fuzzy objective with a consideration of overall satisfaction, and the follower tries to find an optimized strategy under his/her own fuzzy objective according to each of the possible decisions made by the leader. This paper first proposes a new solution concept for fuzzy linear bilevel programming. It then presents a fuzzy number based extended branch-and-bound algorithm for solving fuzzy linear bilevel programming problems.
https://doi.org/10.1142/9789812774118_0044
In this paper, we consider an interactive goal programming approach for fuzzy multi-objective linear programming applied to aggregate production planning problems. Our aim is to determine the overall degree of decision maker satisfaction with the multiple fuzzy goal values and to provide exactly satisfactory solution results for the decision maker in an illustrative example.
https://doi.org/10.1142/9789812774118_0045
In this paper, we develop a linear programming technique for multidimensional analysis of preference (LINMAP) method for solving multiattribute group decision making (MAGDM) problems with preference information on alternatives in a fuzzy environment. Our aim is to develop a fuzzy LINMAP model to evaluate and select knowledge management (KM) tools. KM decision-making problems are often associated with the evaluation of alternative KM tools under multiple objectives and multiple criteria.
https://doi.org/10.1142/9789812774118_0046
This paper outlines, first, a compromise linear programming (LP) model having fuzzy objective function coefficients (CLPFOFC); thereafter, a real-world industrial problem for product-mix selection involving 29 constraints and 8 variables is solved using CLPFOFC. This problem occurs in production planning management, in which a decision-maker (DM) plays a pivotal role in making decisions under a highly fuzzy environment. The authors have tried to find a solution that is flexible as well as robust, allowing the DM to make an eclectic decision in a real-time fuzzy environment.
https://doi.org/10.1142/9789812774118_0047
A supply chain is a network of suppliers, manufacturing plants, warehouses, and distribution channels organized to acquire raw materials, convert these raw materials into finished products, and distribute these products to customers. Linear programming is a widely used technique for optimizing supply chain decisions. In the crisp case, every parameter value is certain, whereas in real life the data are fuzzy rather than crisp. Fuzzy set theory has the capability of modelling problems with vague information. In this paper, a fuzzy optimization model for supply chain problems is developed under vague information. A numerical example is given to show the usability of the fuzzy model.
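As an illustration of how vague data can enter a supply-chain LP, the sketch below solves a classic max-min (Zimmermann-style) fuzzy LP with SciPy; the aspiration level, tolerances and coefficients are hypothetical and are not the model or data of the paper.

```python
# Max-min (Zimmermann-style) fuzzy LP: maximize the overall satisfaction level lambda.
from scipy.optimize import linprog

z0, p0 = 22.0, 4.0                 # fuzzy profit aspiration and its tolerance (hypothetical)
# Fuzzy capacities: 6*x1 + 4*x2 <~ 24 (tolerance 3) and x1 + 2*x2 <~ 6 (tolerance 1).
c = [0.0, 0.0, -1.0]               # decision variables: x1, x2, lambda
A_ub = [[-5.0, -4.0, p0],          # profit membership      >= lambda
        [ 6.0,  4.0, 3.0],         # capacity-1 membership  >= lambda
        [ 1.0,  2.0, 1.0]]         # capacity-2 membership  >= lambda
b_ub = [-(z0 - p0), 24.0 + 3.0, 6.0 + 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}, overall satisfaction lambda={lam:.2f}")
```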
https://doi.org/10.1142/9789812774118_0048
Nowadays the selection of suppliers in supply chain management (SCM) becomes more and more important, and the question of which evaluation method can be used in this area therefore arises. This paper proposes a worst-point-based multi-objective supplier evaluation method. The main ideas are as follows: first, n different supplier evaluation criteria are chosen to form an n-dimensional space in which each supplier's information is a point; then the Euclidean distances between these points are calculated, and the suppliers are ranked by their distance from the best or worst point.
https://doi.org/10.1142/9789812774118_0049
RFID (Radio Frequency Identification) is an Auto-ID technology that uses radio waves to identify individual items automatically. By using RFID systems for identifying and tracking objects, it is possible to improve the performance of a supply chain process in terms of operational efficiency, accuracy and security. RFID systems can be implemented at different levels, such as item, case or pallet, and these various applications create different impacts on supply chain processes. RFID investments are very important strategic decisions, so they require a comprehensive evaluation process in which the tangible and intangible benefits are integrated. Fuzzy cognitive mapping (FCM) is a suitable tool for modelling causal relations in a non-hierarchical manner for RFID investment evaluation. In this paper, the FCM method is used to measure the impact of an RFID investment on a supply chain process.
https://doi.org/10.1142/9789812774118_0050
Since the 1960s many authors have accepted the triple constraints (time, cost, specification) as a standard measure of success, and this still appears to be extremely important in evaluating the success of ICT (information and communication technology) projects. However, an ICT project cannot always be seen as a complete success or a complete failure. Moreover, the parties involved may perceive the terms “success” and “failure” differently.
A quasi-experiment (gaming) was developed in order to determine the measures of success used by the different parties involved in judging an ICT project. The results of this quasi-experiment were analysed using aggregation theory and validated by probabilistic feature models. In general, the figures do not contradict each other.
This research indicates that the impact of the triple constraints on the judgement of success is rather small. Other criteria, such as user happiness and financial or commercial success, are far more important. Surprisingly, whether or not a project was able to meet the predefined specifications was of little importance for the appreciation of the project's success.
https://doi.org/10.1142/9789812774118_0051
Axiomatic Design (AD) is a guide for understanding design problems, while establishing a scientific foundation to provide a fundamental basis for the creation of products and processes. The most important concept in axiomatic design is the existence of the design axioms. AD has two design axioms: independence axiom and information axiom. The independence axiom maintains the independence of functional requirements, and information axiom proposes the selection of the best alternative that has minimum information. In this study AD is proposed for multi-attribute comparison of mobile phones regarding their ergonomic design.
https://doi.org/10.1142/9789812774118_0052
Most decision-making problems deal with uncertain and imprecise data, so conventional approaches cannot effectively find the best solution. To cope with this uncertainty, fuzzy set theory has been developed as an effective mathematical algebra for vague environments. When the system involves human subjectivity, fuzzy algebra provides a mathematical framework for integrating imprecision and vagueness into decision-making models. The main subject of this study is a fuzzy outranking model, and a numerical example of a facility location selection problem with fuzzy data is considered using this model.
https://doi.org/10.1142/9789812774118_0053
Traditionally, when evaluating supplier performance, companies have considered factors such as price, quality, flexibility, etc. However, with environmental pressures increasing, many companies have begun to consider environmental issues and the measurement of their suppliers' environmental performance. This paper presents a performance evaluation model based on a multi-criteria decision-making method, known as VIKOR, for measuring supplier environmental performance. The original VIKOR method was proposed to identify compromise solutions by providing a maximum group utility for the majority and a minimum of individual regret for the opponent. In its original setting, the method uses exact values for the assessment of the alternatives, which can be quite restrictive with unquantifiable criteria. This is especially true if the evaluation is made by means of linguistic terms. For this reason we extend the VIKOR method so as to process such data and to provide a more comprehensive evaluation in a fuzzy environment. The extended method is used in a real industrial application.
https://doi.org/10.1142/9789812774118_0054
In multiple attribute decision-making (MADM) problems, a decision maker (DM) is often faced with the problem of selecting or ranking alternatives associated with conflicting attributes. In this paper, MADM with fuzzy pairwise information is used to solve a human resource flexibility problem. In many manufacturing systems, human resources are the most expensive, but also the most flexible, factors. Therefore, the optimal utilization of human resources is an important success factor contributing to long-term competitiveness.
https://doi.org/10.1142/9789812774118_0055
This paper presents a combined fuzzy analytic hierarchy process (AHP) and fuzzy goal programming (GP) to determine the preferred compromise solution for Enterprise Resource Planning (ERP) training package selection in terms of value creation by multiple objectives. The problem is formulated to include six primary goals: maximize financial benefits, maximize effective software utilisation, maximize employee satisfaction, minimize cost, minimize training duration and minimize risk of operation. Fuzzy AHP is used to specify judgments about the relative importance of each goal. A case study performed for a small manufacturing company running ERP software training is included to demonstrate the effectiveness of the proposed model.
https://doi.org/10.1142/9789812774118_0056
The media sector encompasses the creation, modification, transfer and distribution of media content for the purpose of mass consumption; it is therefore a very active and, when managed properly, very effective sector. In this paper the three management methods MbI, MbO, and MbV are evaluated, and the most adequate management method for media company A is determined using the Fuzzy Analytic Hierarchy Process (FAHP). This study shows the power of FAHP in capturing experts' knowledge.
https://doi.org/10.1142/9789812774118_0057
Relational capital (RC) is a sub-dimension of intellectual capital, which is the sum of all assets that arrange and manage the firm's relations with its environment. It covers the relations with outside stakeholders (i.e. customers, shareholders, suppliers and rivals, the state, governmental institutions and society). Although the most important component of RC is customer relations, it is not the only one to be taken into consideration. Measuring RC is related to how the environment perceives the firm; to control and manage this perception, companies must first measure it. This study aims at defining a methodology to improve the quality of the prioritization of RC measurement indicators under uncertain conditions. To do so, a methodology based on the extent fuzzy analytic hierarchy process (AHP) is applied.
Within the model, the main attributes, their sub-attributes and the measurement indicators are defined. To determine the priority of each indicator, the preferences of experts are gathered using a pairwise-comparison-based questionnaire.
https://doi.org/10.1142/9789812774118_0058
Decision making within the real world inevitably includes the consideration of evidence based on several criteria, rather than a preferred single criterion. Solving a multi-criteria decision problem offers the decision maker a recommendation in terms of the best decision alternatives. Within the framework of this article, we attempt to develop a new method for constructing the comparison matrices of the AHP approach that considers some aspects of uncertainty in the multi-criteria decision-making process. To do this, the rules and logic of intuitionistic fuzzy sets are applied. We provide numerical illustrations using four major criteria, namely population, age of construction, type of construction and mean number of floors in a census block of an urban region. The result is an earthquake vulnerability map presented as two raster layers, whose values are the degrees of membership and non-membership, respectively; regions with a high membership degree in the first layer and a low membership degree in the second can be identified as highly vulnerable regions with a higher degree of reliability.
https://doi.org/10.1142/9789812774118_0059
The consensus process in Group Decision Making (GDM) problems helps to achieve solutions that are shared by the different experts involved in such problems. Because different experts take part in the decision process of GDM problems, it is common that they need to express their information in different domains [5,2]. In this contribution we focus on GDM problems defined in heterogeneous contexts with numerical, linguistic and interval-valued information. Our aim is to define a consensus model that includes an Advice Generator to assist the experts in the consensus reaching process of GDM problems with heterogeneous preference relations. This model provides two important improvements: (i) firstly, its ability to cope with group decision-making problems with heterogeneous preference relations, and (ii) secondly, the figure of the moderator, traditionally present in the consensus reaching process, is replaced by an advice generator, so that the whole group decision-making process can be easily automated.
https://doi.org/10.1142/9789812774118_0060
Performance appraisal is a process used by some firms to evaluate the efficiency and productivity of their employees in order to plan their promotion policy. Initially this process was carried out only by the executive staff, but it has since evolved into an evaluation process based on the opinions of different reviewers: supervisors, collaborators, clients and the employee himself (the 360-degree method). In such an evaluation process the reviewers evaluate a number of indicators related to the employee's performance. These indicators are usually subjective and qualitative in nature, which implies vagueness and uncertainty in their assessment. However, most performance appraisal models force reviewers to provide their assessments of the indicators in a unique, precise quantitative domain. We consider that this obligation leads to a lack of precision in the final results, so in this paper we propose a linguistic evaluation framework to model qualitative information and manage its uncertainty. Additionally, because the different sets of reviewers taking part in the evaluation process have different knowledge about the evaluated employee, it seems suitable to offer a flexible framework in which different reviewers can express their assessments in different linguistic domains according to their knowledge. The final aim is to compute a global evaluation for each employee that can be used by the management team to make decisions regarding their incentive and promotion policy.
https://doi.org/10.1142/9789812774118_0061
In Group Decision Making, the expression of preferences is often a very difficult task for the experts, especially in decision problems with a high number of alternatives. The problem is aggravated when they are asked to give their preferences in the form of preference relations: although preference relations have a very high level of expressivity and present good properties that allow one to operate with them easily, the number of preference values that the experts are required to give increases exponentially. This usually leads to situations where the expert is not capable of properly expressing all his/her preferences in a consistent way (that is, without contradiction), so the information provided can easily end up being either inconsistent or incomplete (when the expert prefers not to give some particular preference values). In this paper we develop a transitivity-based support system to aid experts in expressing their preferences (in the form of preference relations) in a more consistent way. The system works interactively with the expert, making recommendations for the preference values that the expert has not yet expressed. Those recommendations are computed while trying to keep the consistency level of the expert as high as possible.
https://doi.org/10.1142/9789812774118_0062
In this paper, a model for decision-making with linguistic information is discussed, based on uncertainty reasoning in the framework of lattice-valued logic, through an example. In this model, the decision-making process is treated as an uncertainty reasoning problem in which the decision-maker's background knowledge about the problem at hand and the consultancy experts' assessments of the alternatives are regarded as the antecedents of the uncertainty reasoning, and the final decision is taken as its conclusion.
https://doi.org/10.1142/9789812774118_0063
Understanding a situation requires integrating many pieces of information, which can be obtained by a group of data collectors from multiple data sources, and uncertainty is involved in situation assessment. How to integrate multi-source, multi-member uncertain information to derive situation awareness is therefore an important issue in supporting decision making for crisis problems. This study focuses on how uncertain situation information is represented and integrated, and on how situation awareness information is finally derived. A multi-source team information integration approach is developed to support a team's assessment of a situation in an uncertain environment. A numerical example is then given to illustrate the proposed approach.
https://doi.org/10.1142/9789812774118_0064
Most of the work on flowshop problems assumes that the problem data are known exactly in advance, or else the common approach to the treatment of uncertainties is the use of probabilistic models. However, the evaluation and optimization of probabilistic models is computationally expensive, and the application of a probabilistic model is rational only when descriptions of the uncertain parameters are available from historical data. In this paper we deal with a permutation flowshop problem with fuzzy processing times. First we explain how to compute the start and finish time of each operation on the related machines for a given sequence of jobs using fuzzy arithmetic. Next, we use a fuzzy ranking method in order to select the best schedule with minimum fuzzy makespan. We propose an ant colony optimization algorithm for generating and finding good (near-optimal) schedules.
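For the fuzzy-arithmetic step described above, the following sketch computes a fuzzy makespan for a fixed job permutation using triangular processing times, component-wise addition, the usual component-wise approximation of the fuzzy max, and a simple centroid ranking; the data and the approximation choice are illustrative assumptions, and the paper's ant colony search is not shown.

```python
# Fuzzy completion-time recursion for a fixed permutation in a flowshop; illustrative only.
def f_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def f_max(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))    # common TFN approximation of fuzzy max

def fuzzy_makespan(sequence, p):
    """p[j][m] is the triangular processing time (lo, mid, hi) of job j on machine m."""
    machines = len(p[sequence[0]])
    done = [(0.0, 0.0, 0.0)] * machines              # completion time of previous job per machine
    for j in sequence:
        prev = (0.0, 0.0, 0.0)                       # completion of job j on the previous machine
        for m in range(machines):
            start = f_max(done[m], prev)             # wait for the machine and for the job itself
            done[m] = f_add(start, p[j][m])
            prev = done[m]
    return done[-1]

def centroid(t):                                     # a simple ranking index for TFNs
    return sum(t) / 3.0

p = {0: [(2, 3, 4), (1, 2, 3)], 1: [(1, 1, 2), (3, 4, 5)], 2: [(2, 2, 3), (1, 2, 2)]}
for seq in [(0, 1, 2), (1, 0, 2), (2, 1, 0)]:
    ms = fuzzy_makespan(seq, p)
    print(seq, "fuzzy makespan:", ms, "centroid:", round(centroid(ms), 2))
```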
https://doi.org/10.1142/9789812774118_0065
The time-dependent vehicle routing problem is a vehicle routing problem in which travel costs along the network depend upon the time of day during which travel is carried out. Most of the models for vehicle routing reported in the literature assume constant and deterministic travel times. This paper describes a route construction method for the time-dependent vehicle routing problem with fuzzy travelling times according to different traffic conditions.
https://doi.org/10.1142/9789812774118_0066
A good transport schedule may be disrupted when a transportation accident happens. In order to obtain an adjusted transport schedule, we put forward a model for the vehicle routing problem with stochastic factors in contingency situations, and discuss an algorithm that combines stochastic simulation with a genetic algorithm. We verify this algorithm by applying it to a specific case, and reach the conclusion that it performs better than the ordinary approach.
https://doi.org/10.1142/9789812774118_0067
A web data extraction model based on HTML or XML Web pages is provided. Firstly, the Web document is read from the web server with STOCK, its format is checked, and the existing HTML web page is transformed into XML or XHTML (a subset of XML); secondly, an “operation” on a Web page generates a series of XML documents, and integrating these documents leads to data storage; thirdly, absolute paths in XPath and the anchors are used to extract the data of interest with XML data-format tools; finally, the data are retrieved, an XML output is constructed, and the query result is displayed in the browser. The results show that implementing Web data extraction with this model is effective, but limitations and defects remain, so an improved semantic web data extraction model is also provided.
https://doi.org/10.1142/9789812774118_0068
Since most companies prefer outsourcing for e-activities, the selection of an e-service provider becomes a crucial issue for those companies. E-service evaluation is a complex problem in which many qualitative attributes must be considered, and these kinds of attributes make the evaluation process hard and vague. Cost-benefit analyses applied to various areas are usually based on data under certainty or risk; in the case of uncertain, vague, and/or linguistic data, fuzzy set theory can be used to handle the analysis. This paper presents the evaluation and selection process for e-service provider alternatives, in which many main and sub-attributes are considered. The e-service providers are evaluated and prioritized using a fuzzy multi-attribute group decision-making method. This method first defines group consistency and inconsistency indices based on the preferences for the alternatives given by the decision makers and constructs a linear programming decision model based on the distance of each alternative to an unknown fuzzy positive ideal solution. The fuzzy positive ideal solution and the weights of the attributes are then estimated using the new decision model based on the group consistency and inconsistency indices. Finally, the distance of each alternative to the fuzzy positive ideal solution is calculated to determine the ranking order of all alternatives.
https://doi.org/10.1142/9789812774118_0069
More and more websites have started to apply intelligent techniques to conduct effective and flexible operations and provide high-quality, personalized online services. One of the explicit manifestations of website intelligence is that web-based information flow directs and guides a related human flow. This paper chooses a Sino-Australian case to explore how information flow directs a study-abroad human flow through the provision of intelligent e-services. An evaluation of the case is conducted to demonstrate the rationality of the directing function and process.
https://doi.org/10.1142/9789812774118_0070
The control problem of web-advertising income is discussed in this paper. Firstly, the evaluation index system of the web advertisement and the neural network evaluation model are built. Secondly, in order to effectively control the factors influencing the income of the web advertisement, a genetic algorithm for interval optimization is proposed to find the corresponding intervals of those factors according to the anticipated income lying in a certain interval. Finally, an appropriate allocation of the advertising cost is made by using a clustering algorithm to classify the keywords. Numerical experiments are given and the results show that the novel method is effective.
https://doi.org/10.1142/9789812774118_0071
In this paper, we describe our design and implementation of a flexible, efficient way of teaching E-Commerce by utilizing an online game mode. We are building a multi-player E-Commerce simulation online game – ECGAME – which anyone can log in to via the Internet from anywhere in the world. The platform provides participants with a virtual integrated business world to experience and helps them learn business rules more effectively. It also motivates participants to learn E-Commerce and sparks competitiveness.
https://doi.org/10.1142/9789812774118_0072
Widely supported by industry and research developments, Semantic Web Services appear to be the next generation of the service-oriented paradigm, and their discovery constitutes a great challenge. Although much work has been done on selecting semantic services, an exact selection model that could describe services and service selection in a powerful way is still not defined. Moreover, little consideration has been given to quantifying the selection of services based on both capabilities and quality of service (QoS). Here, a QoS-based selection model for semantic services is proposed, and a service matching algorithm working under this model is presented. Two real-life examples are also discussed to validate its implementation.
https://doi.org/10.1142/9789812774118_0073
In this paper we outline the architecture of a distributed Trust Layer that can be superimposed on metadata generators, like our Trust Assertion Maker (TAM). TAM is a tool for producing metadata from multiple sources in an authority-based environment, where the user's role is certified and associated with a trust value. These metadata can complement metadata automatically generated by document classifiers. Our ongoing experimentation is aimed at validating the role of a Trust Layer as a technique for automatically screening high-quality metadata in a set of assertions coming from sources with different levels of trustworthiness.
https://doi.org/10.1142/9789812774118_0074
Mining the time-stamped numerical data contained in web access logs is interesting for numerous applications (e.g. customer targeting, automatic updating of commercial websites, or web server dimensioning). In this context, the usual algorithms for sequential pattern mining do not allow numerical information to be processed properly. In previous work we defined fuzzy sequential patterns to cope with this numerical representation problem. In this paper, we apply these algorithms to web mining and assess them through experiments showing the relevance of this work in the context of web access log mining.
https://doi.org/10.1142/9789812774118_0075
It is well known that Elliptic Curve Cryptography (ECC) features the highest bit security. The author analyses the advantages of ECC by comparison, explores the mathematical basis of elliptic curves and the complexity of the discrete logarithm, and makes improvements to the existing Elliptic Curve Digital Signature Algorithm (ECDSA) to accelerate the arithmetic and shorten the time needed for data transmission. Better results have been achieved by applying ECC in e-commerce to realize the digital signature algorithm.
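The paper's own modifications to ECDSA are not reproduced here; for context, the sketch below only shows standard ECDSA signing and verification with the widely used `cryptography` package, with a hypothetical e-commerce message.

```python
# Standard ECDSA sign/verify for context; the paper's accelerated variant is not shown.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())     # NIST P-256 curve
message = b"order #4711, amount 99.90 EUR"                 # hypothetical payload

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified, DER length:", len(signature), "bytes")
```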
https://doi.org/10.1142/9789812774118_0076
Requirements for the future Internet may be implemented in peer-to-peer network environments in which all network nodes are equal to each other. Inspired by the similar features of immune systems and the future Internet, and based on the immune symmetrical network theory, a network service model in a peer-to-peer network environment is designed on our bio-network platform. To validate the feasibility of the model, we run experiments with different network service distributions and user requests towards each node. The results for hops per request over time show that users can acquire network services flexibly and efficiently.
https://doi.org/10.1142/9789812774118_0077
Bioinformatics is a promising and innovative research field which raises hopes of developing new approaches for, e.g., medical diagnostics. Appropriate methods for pre-processing as well as high-level analysis are needed. In this overview we highlight some aspects in the light of machine learning and neural networks. Thereby, we point out crucial problems, such as the curse of dimensionality, and give possibilities for overcoming them. In an exemplary application we briefly demonstrate a possible scenario in the field of mass spectrometry analysis for cancer detection. Despite the high number of techniques dedicated to bioinformatics research and many successful applications, we are at the beginning of a process of massively integrating the aspects and experiences of the different core subjects such as biology, medicine, computer science, chemistry, physics, and mathematics.
https://doi.org/10.1142/9789812774118_0078
High-resolution HPLC data from a tomato germplasm collection are studied: the analysis of the molecular constituents of tomato peels from 55 experiments is conducted with a focus on the visualization of the plant interrelationships and on biomarker extraction for the identification of new and highly abundant substances at a wavelength of 280 nm. 3000-dimensional chromatogram vectors are processed by state-of-the-art and novel methods for baseline correction, data alignment, biomarker retrieval, and data clustering. These processing methods are applied to the tomato data set and the results are presented in a comparative manner, focusing on interesting clusters and on the retention times of nutritionally valuable tomato lines.
https://doi.org/10.1142/9789812774118_0079
Selecting relevant features in mass spectra analysis is important both for classification and for the search for causality. In this paper, it is shown how using mutual information can help answer both objectives in a model-free, nonlinear way. A combination of ranking and forward selection makes it possible to select several feature groups that may lead to similar classification performances, but that may lead to different results when evaluated from an interpretability perspective.
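A small sketch of the ranking-plus-forward-selection combination is given below; the synthetic data and the k-nearest-neighbour classifier are placeholders, not the mass-spectra setting of the paper.

```python
# Mutual-information ranking followed by greedy forward selection; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]  # rank features by MI
candidates, selected, best = list(ranking[:15]), [], -np.inf
while candidates:                     # greedy forward selection among the top-ranked features
    score, f = max((cross_val_score(KNeighborsClassifier(), X[:, selected + [f]], y, cv=5).mean(), f)
                   for f in candidates)
    if score <= best:
        break
    best, selected = score, selected + [f]
    candidates.remove(f)

print("selected features:", [int(f) for f in selected], "cv accuracy: %.3f" % best)
```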
https://doi.org/10.1142/9789812774118_0080
With the increasing amount of data nowadays produced in the field of proteomics, automated approaches to reliable protein identification are highly desirable. One widely used approach is protein mass fingerprints (PMFs), which allow database searching for the unknown protein based on a MALDI-TOF mass spectrum of its tryptic digest. Current approaches and software packages for interpreting PMFs rarely make use of the peak intensities in the measured spectrum, mostly due to the difficulty of predicting peak intensities in the simulated mass spectra. In this work, we address the problem of predicting peak intensities in MALDI-TOF mass spectra, and we use regression support vector machines (ν-SVR) for this purpose. We compare the impact on prediction accuracy of different preprocessing and normalization modes, such as binning and balancing the data sets. Our preliminary results indicate that we can predict peak intensities using ν-SVR even from very small data sets. It is reasonable to assume that peak intensity prediction can greatly improve automated peptide identification.
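As a minimal placeholder for the regression step, the sketch below fits scikit-learn's NuSVR to synthetic feature/intensity data; the actual spectral features, binning and balancing steps of the paper are not reproduced.

```python
# nu-SVR regression on stand-in data, as a placeholder for peak-intensity prediction.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                                   # stand-in peptide feature vectors
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=120)    # synthetic "intensities"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X_tr, y_tr)
print("R^2 on held-out data: %.3f" % model.score(X_te, y_te))
```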
https://doi.org/10.1142/9789812774118_0081
We consider the problem of how to use automated techniques to learn simple and compact classification rules from microarray gene expression data. Our approach employs the traditional “genetic programming” (GP) algorithm as a supervised categorization technique, but rather than applying GP to gene expression vectors directly, it applies GP to “enhanced feature vectors” obtained by preprocessing the gene expression data using the Gene Ontology and PIR ontologies. On the two datasets considered, this “GP + enhanced feature vectors” combination succeeds in producing compact and simple classification models with near-optimal classification accuracy. For sake of comparison, we also give results from the combination of support vector machine classification and enhanced feature vectors on the same datasets.
https://doi.org/10.1142/9789812774118_0082
In this paper a technique to improve protein secondary structure prediction is proposed. The approach is based on the idea of combining the results of a set of prediction tools, choosing the most correct parts of each prediction. The correctness of the resulting prediction is measured referring to accuracy parameters used in several editions of CASP. Experimental evaluations validating the proposed approach are also reported.
https://doi.org/10.1142/9789812774118_0083
Cancer immunoprevention vaccines are based, like all vaccines, on drugs which give the immune system the information necessary to recognize tumor cells as harmful. In endogenous tumors, the vaccine cannot eliminate all tumor cells, but keeps them at a non-dangerous level. In this paper we show that the vaccine's administration acts as a stabilizing perturbation for the immune system. The results suggest that it is possible to model such an effect using an ODE-based technique with an “external input”.
https://doi.org/10.1142/9789812774118_0084
In this paper, we address the challenging task of learning accurate classifiers from micro-array datasets involving a large number of features but only a small number of samples. We present a greedy step-by-step procedure (SSFS) that can be used to reduce the dimensionality of the feature space. We apply the Minimum Description Length principle to the training data for weighting each feature and then select an “optimal” feature subset by a greedy approach tuned to a specific classifier. The Acute Lymphoblastic Leukemia dataset is used to evaluate the effectiveness of the SSFS procedure in conjunction with different state-of-the-art classification algorithms.
https://doi.org/10.1142/9789812774118_0085
The performance of a Learning Classifier System (LCS) applied to the classification of simplified hydrophobic/polar (HP) lattice model proteins was compared to other machine learning (ML) algorithms. The GAssist LCS classified functional HP model proteins on the 3D diamond lattice as folding or non-folding with 88.3% accuracy, significantly outperforming three of the four other methods. GAssist correctly classified HP model protein instances on the basis of Contact Number (CN) and Residue Exposure (RE) on both 2D square and 3D cubic lattices at levels between 27.8% and 80.9%. Again, the LCS performed at a level comparable to the other ML technologies in this task, significantly outperforming them in 24 of 180 cases and being outperformed just six times. The benefits of using an LCS for this problem domain are discussed and examples of the LCS-generated rules are described.
https://doi.org/10.1142/9789812774118_0086
Protein structure prediction is a powerful tool in today's drug design industry as well as in the molecular modeling stage of x-ray crystallography research. This paper proposes a redefined encoding scheme based on the combination of Chou-Fasman parameters, physico-chemical parameters and a position-specific scoring matrix for protein secondary structure prediction. A new method of calculating the reliability index based on the number of votes and the SVM decision value is also proposed and shown to assist the design of better filters. The proposed features are tested on the RS126 and CB513 datasets and shown to give better cross-validation results than existing techniques.
https://doi.org/10.1142/9789812774118_0087
Exploratory analysis of genomic data sets using unsupervised clustering techniques is often affected by problems due to the small cardinality and high dimensionality of the data set. A way to alleviate those problems is to perform clustering in an embedding space where each data point is represented by the vector of its memberships to fuzzy sets characterized by a set of probes selected from the data set. This approach has been demonstrated to lead to significant improvements with respect to the application of clustering algorithms in the original space and in the distance embedding space. In this paper we propose a constructive technique based on Simulated Annealing that is able to select small-cardinality sets of probes supporting high-quality clustering solutions.
https://doi.org/10.1142/9789812774118_0088
We compare two ensemble methods for classifying DNA microarray data. The methods use different strategies to face the curse of dimensionality plaguing these data: one projects the data along random coordinates, the other compresses them into independent boolean variables. Both result in random feature extraction procedures feeding SVMs as base learners for a majority-voting ensemble classifier. The classification capabilities are comparable, degrading on instances that are acknowledged as anomalous in the literature.
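A minimal sketch of the first of the two strategies (random projections feeding SVM base learners combined by majority vote) is given below; it is an illustration only, and the boolean-compression variant, the ensemble size, and the projection dimensionality are not taken from the paper.

```python
# Sketch (not the authors' implementation): an ensemble of SVMs, each trained on a
# different random projection of the expression vectors, combined by majority vote.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

class RandomProjectionSVMEnsemble:
    def __init__(self, n_members=25, n_components=50, random_state=0):
        rng = np.random.RandomState(random_state)
        self.members = [
            (GaussianRandomProjection(n_components=n_components,
                                      random_state=rng.randint(1 << 30)),
             SVC(kernel="linear"))
            for _ in range(n_members)
        ]

    def fit(self, X, y):
        for proj, svm in self.members:
            svm.fit(proj.fit_transform(X), y)   # each member sees its own projection
        return self

    def predict(self, X):
        votes = np.array([svm.predict(proj.transform(X))
                          for proj, svm in self.members])
        # majority vote over the ensemble members (assumes binary 0/1 labels)
        return (votes.mean(axis=0) > 0.5).astype(int)
```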
https://doi.org/10.1142/9789812774118_0089
A novel Grey forecasting model for predicting S. sclerotiorum (Lib.) de Bary disease on winter rapeseed (B. napus) is built based on the Grey GM(1,1) model. The residual error test and the posterior error test were used to calibrate the model. Unlike conventional forecasting methods, the GM(1,1)-based Grey calamity prediction forecasts a value ΔT to infer the probable year of Sclerotinia disease outbreaks relative to the origin, and the result is then used to recommend whether or not to spray a field, in order to avoid unnecessary fungicide application. Based on practical experiments in Hunan province, the threshold (f) of the disease rate at the time when winter rapeseed begins to flower is defined as 5, and f < 5 is called a down-calamity. The Grey forecasting model was tested at 7 stations in 2004 and 2005 and predicted the probable year of Sclerotinia disease outbreaks and the need for fungicide application with first-class grade and high accuracy.
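The core GM(1,1) fitting and forecasting step named above is standard and can be sketched compactly; the series values and forecast horizon below are illustrative, and the calamity-prediction step (inferring ΔT and the spraying recommendation) is not reproduced.

```python
# Minimal GM(1,1) sketch to illustrate the Grey forecasting step described above.
import numpy as np

def gm11(x0, n_forecast=3):
    """Fit a GM(1,1) model to a short positive series x0 and forecast ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # develop coefficient a, grey input b

    def x1_hat(k):                                       # k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    ks = np.arange(n + n_forecast)
    x1_pred = x1_hat(ks)
    x0_pred = np.diff(np.concatenate([[0.0], x1_pred]))  # inverse AGO
    x0_pred[0] = x0[0]
    return x0_pred[:n], x0_pred[n:]                      # fitted values, forecasts

fitted, forecast = gm11([2.9, 3.4, 3.1, 3.8, 4.2])       # illustrative series only
print(forecast)
```

The residual error and posterior error tests mentioned in the abstract would then compare `fitted` against the observed series to grade the model.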
https://doi.org/10.1142/9789812774118_0090
The identification of seismically active periods and episodes in spatio-temporal data is a complex, scale-related clustering problem. Clustering by scale-space filtering is employed to provide a quantitative basis for their identification. Visualization methods are employed to enable researchers to interactively assess and judge the clustering results against their domain-specific experience, in order to obtain the optimal segmentation of the seismically active periods and episodes. Real-life applications to strong earthquakes that occurred in Northern China confirm the effectiveness of such an integrative approach.
https://doi.org/10.1142/9789812774118_0091
By embedding fuzzy neural networks, we construct a fuzzy approximation network perturbation system based on human knowledge. A survey of the principles of this system is presented, including its architectures and hybrid learning rules. In order to enhance traffic productivity, we establish a prediction model using the Fuzzy Logic Toolbox of MATLAB and apply it to risk analysis of transportation capacity. According to the simulation results, the model performs well and decreases risk in transportation capacity.
https://doi.org/10.1142/9789812774118_0092
The “memory” and “simulation” capabilities of artificial neural networks are adopted for flood forecasting, because neural networks can simulate and record the relationship between the input and the output of a complex “function” through training and learning from historical data, without any explicit mathematical model. In this research, the authors propose a new flood forecast system, with related applications, based on neural network methods. The system showed better performance and efficiency, and it is expected that this application will become more sensitive and further improve flood forecast performance.
https://doi.org/10.1142/9789812774118_0093
The meaning of marine security is interpreted from the point of view of synthetic risk, and a management system is designed by integrating 4S technologies (GIS, GPS, RS and DSS). On this basis, an integrated management pattern for marine security synthetic risk is provided, with the management system serving as the management tool. Through risk calculation and fuzzy appraisal, the harm and loss brought by a risk can be estimated, and risk forecasting or pre-warning can also be achieved. Through the response mechanism of emergency management, responding to and reducing the influence of risk can be realized. The whole process depends on the system that has been built.
https://doi.org/10.1142/9789812774118_0094
This paper analyzes the urban rainstorm waterlogging disaster in Tianjin city based on statistics and numerical simulation. Firstly, the basic theory of the urban rainstorm waterlogging mathematical model is introduced and used to simulate various rain process conditions according to the features of the rainstorm and the draining rules. Secondly, the distribution of the waterlogging disaster and its influence on traffic are preliminarily evaluated. Finally, some management and mitigation measures for the urban rainstorm waterlogging disaster are discussed.
https://doi.org/10.1142/9789812774118_0095
Tongliao faces environmental risk problems such as soil erosion and land desertification. The factors affecting the region's environmental risk are analyzed quantitatively using the principal components analysis method. The results indicate that the dominant factors causing the environmental risk are industrial gross output, sewage treatment rate, chemical fertilization rate, cultivated area, and so on; that the environmental risk is the combined result of natural and human factors, with the human factor being the more important; and that both the natural and the human factors influence the environment through the threatening function.
https://doi.org/10.1142/9789812774118_0096
Because flood data series for small drainage basins are lacking, the data available for flood risk analysis are insufficient (incomplete information); this is risk analysis under small-sample conditions. One method for analyzing problems of this kind is to treat the small sample as fuzzy information. The optimized fuzzy information can then be processed using information diffusion theory to obtain a more reliable risk analysis result. Small samples supply only limited and incomplete information, from which statistical rules cannot be clearly demonstrated. Fortunately, the incomplete information, and especially the fuzzy information supplied by a small sample, can be treated using fuzzy information optimization technology based on information diffusion theory. In this paper the risk analysis method based on information diffusion theory is used to develop a model for flood risk analysis, and the application of the model is illustrated with the Jinhuajiang and Qujiang drainage basins of China as examples. The study indicates that the model gives fairly stable analytical results even under small-sample conditions. The method is easy to apply and its results are easy to understand, so it may play a guiding role in disaster prevention to some extent.
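To make the information diffusion step concrete, a hedged sketch of normal information diffusion for small-sample exceedance-probability estimation is given below; the monitoring points, diffusion coefficient choice, and sample values are illustrative and are not the paper's data.

```python
# Hedged sketch of normal information diffusion for small-sample risk estimation.
import numpy as np

def diffusion_risk(sample, u, h=None):
    """Estimate exceedance probabilities P(X >= u_j) from a small sample."""
    x = np.asarray(sample, dtype=float)
    u = np.asarray(u, dtype=float)
    if h is None:
        # simple illustrative choice of diffusion coefficient
        h = 1.4208 * (x.max() - x.min()) / (len(x) - 1)
    # diffuse each observation over the monitoring points with a Gaussian kernel
    f = np.exp(-(x[:, None] - u[None, :]) ** 2 / (2 * h ** 2))
    f /= f.sum(axis=1, keepdims=True)        # each observation carries unit information
    q = f.sum(axis=0)                        # information gained at each monitoring point
    p = q / q.sum()                          # frequency estimate at each point
    return np.cumsum(p[::-1])[::-1]          # exceedance probability P(X >= u_j)

# illustrative annual flood peaks (m^3/s) and monitoring points
risk = diffusion_risk([820, 1130, 960, 1480, 1010, 1250],
                      u=np.linspace(700, 1600, 10))
print(np.round(risk, 3))
```

The idea is that each observation is spread over nearby monitoring points instead of being treated as a single crisp value, which stabilizes the estimated frequency curve when only a handful of observations exist.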
https://doi.org/10.1142/9789812774118_0097
Based on the idea of fitting a function curve with a neural network, this paper establishes neural network models for estimating agricultural drought quantitatively and describing its probability distribution. Models based on this idea avoid the inconvenience of establishing concrete mathematical formulas and calculating their parameters, and the method can fit the probability function even when the theoretical distribution of the random variable is unknown. For the network training method, gradient descent is combined with a chaos algorithm, which helps the network find the global minimum quickly. Finally, the paper calculates the probability distribution of the extent of agricultural drought in the Qucun irrigation area, Puyang city, Henan province, validating the correctness of the models.
https://doi.org/10.1142/9789812774118_0098
A response mechanism for harmony among departments in emergency management is presented for the first time. A computer simulation model for request-response is then built based on this response mechanism. Finally, some mathematical methods involved in implementing the simulation model are discussed.
https://doi.org/10.1142/9789812774118_0099
This paper presents a novel approach to mobile robot environment modeling based on principal component analysis of ultrasonic sensor array data. A principal components space, of lower dimensionality than the raw data space, is constructed from the principal components of a large number of ultrasonic sensor data sets. Subsequent ultrasonic data sets from the environment are projected as points in this principal components space. By applying the support vector machine (SVM) method, these projections are classified into typical local structures of the environment so that the robot can discriminate between them. Experimental results are provided to show that the proposed method is satisfactory for mobile robot navigation applications.
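The projection-then-classification pipeline described above can be sketched in a few lines; this is not the authors' implementation, and the sensor array size, number of principal components, and structure labels are placeholder assumptions.

```python
# Minimal PCA + SVM pipeline of the kind described above, assuming each row of X is
# one reading of an ultrasonic sensor array and y labels the local structure
# (e.g., corridor, corner, open space).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))          # stand-in 16-element sonar ring readings
y = rng.integers(0, 3, size=300)        # stand-in structure labels

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                # project into a low-dimensional PC space
    SVC(kernel="rbf", C=10.0),          # classify the projections
)
print(cross_val_score(model, X, y, cv=5).mean())
```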
https://doi.org/10.1142/9789812774118_0100
This paper presents a solution to the Simultaneous Localization and Map Building (SLAM) problem for a mobile agent which navigates in an indoor environment and is equipped with a conventional laser range finder. The approach is based on the stochastic paradigm and employs a novel feature-based approach to map representation. Stochastic SLAM is performed by storing the robot pose and landmark locations in a single state vector and estimating it by means of a recursive process; in our case, this estimation process is based on an extended Kalman filter (EKF). The main novelty of the described system is its efficient approach to natural feature extraction, which exploits the curvature information associated with every planar scan provided by the laser range finder. In this work, corner features have been considered. Real experiments carried out with a mobile robot show that the proposed approach acquires corners of the environment in a fast and accurate way. These landmarks make it possible to simultaneously localize the robot and build a corner-based map of the environment.
https://doi.org/10.1142/9789812774118_0101
Biomimetic robot fish can play an important role in underwater object reconnaissance and tracking. In order to fulfill more complex tasks, obstacle avoidance is an indispensable capability for a robot fish. The design of a biomimetic robot fish with multiple ultrasonic and infrared sensors is presented in this paper. An obstacle avoidance strategy based on reinforcement learning is proposed, and state-behavior pairs are obtained. Computer simulation shows the validity of the learning results.
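The abstract does not say which reinforcement learning algorithm is used; as one common choice, a tabular Q-learning loop that yields state-behavior pairs might look like the sketch below, where the state/action discretization, reward, and environment are all placeholders.

```python
# Illustrative tabular Q-learning of state-behavior pairs for obstacle avoidance.
import numpy as np

n_states, n_actions = 27, 3            # e.g., coarse sonar/IR readings x {left, straight, right}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: returns (next_state, reward)."""
    reward = -1.0 if rng.random() < 0.2 else 0.1   # penalize (simulated) collisions
    return rng.integers(n_states), reward

state = rng.integers(n_states)
for _ in range(10000):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update rule
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)              # learned state-behavior pairs
```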
https://doi.org/10.1142/9789812774118_0102
In this paper we describe a methodology for automatically obtaining modular artificial-neural-network-based control architectures for snake-like robots. The approach is based on the use of behavior modulation structures that are incrementally evolved using macroevolutionary algorithms. The method is suited to problems that can be solved through a progressive increase in the complexity of the controllers and for which the fitness landscapes are mostly flat with sparse peaks. This is usually the case when robot controllers are evolved using life simulation to evaluate their fitness.
https://doi.org/10.1142/9789812774118_0103
The singularity analysis of a parallel manipulator is often very complicated. There are multiple criteria for determining the singular behavior of a parallel manipulator, for example the rank condition criterion of a screw set χ of the parallel mechanism, the second-order criterion of the screw set χ, the transverse criterion, etc. This paper aims to explain, for the first time, a general method for analyzing the relationship between the different criteria for singular configurations with the help of the decision tree method. Through the analysis of a large number of parallel manipulators, we find that the second-order criterion is the most important one for determining whether a singular configuration is bifurcated.
https://doi.org/10.1142/9789812774118_0104
This paper presents an efficient method for the automatic training of high-performance visual object detectors, and its successful application to training a back-view car detector. Our training method is AdaBoost applied to a very general family of visual features (called “control-point” features), with a specific feature-selection weak learner, evo-HC, which is a hybrid of hill climbing and evolutionary search. Very good results are obtained for the car-detection application: a 95% positive car detection rate with less than one false positive per image frame, computed on an independent validation video. It is also shown that our original hybrid evo-HC weak learner obtains detection performances that are unreachable in reasonable training time with a crude random search. Finally, our method seems potentially efficient for training detectors for very different kinds of objects, as it has previously been shown to provide state-of-the-art performance on pedestrian-detection tasks.
https://doi.org/10.1142/9789812774118_0105
Two premier issues in eddy current testing (ECT), namely the automatic identification of cracks against their noisy backgrounds and the sizing of cracks with complex morphologies, are dealt with in this paper. A signal processing method based on statistical pattern recognition is established to realize automatic crack identification, and a new method for estimating crack depth is proposed that uses a new depth sizing index, analytically constructed from raw measurement signals on the basis of system optimization theory.
https://doi.org/10.1142/9789812774118_0106
Sign language is a method of communication that uses facial expressions and gestures. However, not only are the natural learning and interpretation of sign language very difficult, it also takes a hearing person a long time to represent and translate it fluently. Consequently, we design and implement a real-time sentential recognition system for the Korean Standard Sign Language (hereinafter, KSSL) using fuzzy logic and wireless haptic devices, based on a post-wearable PC. Experimental results show an average recognition rate of 93.9% for dynamic and continuous KSSL.
https://doi.org/10.1142/9789812774118_0107
This paper addresses the issue of the intuitionistic defuzzification of digital images. Based on a recently introduced framework for intuitionistic fuzzy image processing, the validity conditions and properties of different intuitionistic defuzzification schemes are studied in the context of contrast enhancement.
https://doi.org/10.1142/9789812774118_0108
In this work we present a heuristic approach to the intuitionistic fuzzification of color images, that is, to constructing the intuitionistic fuzzy set corresponding to a color digital image. Exploiting the physical properties and drawbacks of the imaging and acquisition chain, we model the uncertainties present in digital color images using the concept of a fuzzy histogram built from fuzzy numbers.
https://doi.org/10.1142/9789812774118_0109
In this paper we propose a query image retrieval technique using fuzzified features extracted from co-occurrence matrices. Among the fourteen features proposed by Haralick, three features, Angular Second Moment (ASM), Entropy and Contrast, are computed from crisp and fuzzy co-occurrence matrices. These features are also computed with newly proposed formulae based on fuzzy and intuitionistic fuzzy concepts. The fuzzy features are compared with the crisp features by sensitivity analysis, and the superiority of the former over the latter is established. The features are then used for query image retrieval on monochrome as well as color images. It is established that in a noisy environment the proposed fuzzy features successfully retrieve the query, whereas the crisp features either fail to retrieve the query or result in multiple faulty retrievals.
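For reference, the three crisp Haralick features named above can be computed from a gray-level co-occurrence matrix as in the sketch below; the fuzzy and intuitionistic fuzzy variants proposed in the paper are not reproduced, and the image, gray-level quantization and offset are illustrative.

```python
# Plain (crisp) illustration of ASM, Entropy and Contrast from a co-occurrence matrix.
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized co-occurrence matrix for the pixel offset (dx, dy)."""
    img = np.asarray(image)
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def haralick_features(P):
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                                   # Angular Second Moment
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))         # Entropy
    contrast = np.sum((i - j) ** 2 * P)                    # Contrast
    return asm, entropy, contrast

img = np.random.default_rng(0).integers(0, 8, size=(64, 64))   # stand-in image
print(haralick_features(glcm(img)))
```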
https://doi.org/10.1142/9789812774118_0110
Although fuzzy logic methods are of great interest in many GIS applications, traditional fuzzy logic has two important deficiencies. First, to apply fuzzy logic we need to assign, to every property and for every value, a crisp membership function. Second, fuzzy logic does not distinguish between the situation in which there is no knowledge about a certain statement and the situation in which the belief for and against the statement is the same. For this reason it is not recommended for problems with missing data or where grades of membership are hard to define. In this paper, a simple fuzzy region and the fundamental concepts for uncertainty modeling of spatial relationships are analyzed from the viewpoint of intuitionistic fuzzy (IF) logic. We demonstrate how IF logic can provide a model for fuzzy regions, i.e., regions with indeterminate boundaries. As a proof of concept, the paper discusses the process of creating thematic maps using remote sensing satellite imagery.
https://doi.org/10.1142/9789812774118_0111
Simulators based on Virtual Reality (VR) provide significant benefits over other methods of training, mainly for critical procedures. The assessment of training performed in this kind of system is necessary to determine the training quality and provide feedback about the user's performance. Because VR simulators are real-time systems, on-line evaluation tools attached to them must use low-complexity algorithms so as not to compromise the performance of the simulators. This work presents a new approach to on-line evaluation using an assessment tool based on the fuzzy Bayes rule for modeling and classifying the simulation into pre-defined classes of training. This method allows the use of continuous variables without loss of information. Results of its application are provided and compared with another evaluation system based on the classical Bayes rule.
https://doi.org/10.1142/9789812774118_0112
Gynecological cancer is one of the most common causes of cancer-related deaths in women. The training of new professionals depends on the observation of cases in real patients. To improve this training, a simulator based on virtual reality was developed that shows pathologies that can cause gynecological cancer. This paper presents an evaluation tool developed for this simulator to assess the student's knowledge of the presented situation and classify his or her training. This on-line evaluation tool uses two fuzzy rule-based expert systems to monitor the two stages of the virtual training.
https://doi.org/10.1142/9789812774118_0113
Online videogames are not just entertainment products but can also be used as environments for Multi-agent Systems (MAS) in which agents can reach multiple sources of knowledge from which to learn. In particular, videogames are especially well suited for learning Human-Level Artificial Intelligence, because they strengthen human-agent interaction. In this paper we present Screaming Racers, a simple car-racing online videogame designed for experimenting with MAS that learn to drive racing cars along competition tracks. We also present our car-driving learning results with several Neuroevolution techniques, including Neuroevolution of Augmenting Topologies (NEAT).
https://doi.org/10.1142/9789812774118_0114
Urban traffic jams are a daily problem in large cities, and urban traffic control systems aim to solve this problem. The major difficulty in urban traffic is the great number of variables needed to represent the traffic state, such as flow, speed and density. Another troublesome characteristic of urban traffic is the lack of a precise relation between these variables. These problems have led researchers to develop several traffic models to describe traffic behavior and to build controls on them. In this paper, we present a new model of traffic flow and introduce a control for this model that helps agents to control traffic autonomously.
https://doi.org/10.1142/9789812774118_0115
Smart Homes are the subject of intensive research exploring the different ways in which they can provide home-based preventive and assistive technology to patients and to other vulnerable sectors of the population, such as the elderly and frail. Current studies of Smart Homes focus on the technological side (e.g., sensors and networks), but little effort has been devoted to what we consider a key aspect of these kinds of systems, namely their capability to intelligently monitor situations of interest and to advise or act in the best interest of the home occupants. This paper investigates the importance of spatio-temporal reasoning and uncertainty reasoning in the design of Smart Homes. Accordingly, a framework is outlined that applies a methodology referred to as the Rule-base Inference Methodology using the Evidential Reasoning approach, in conjunction with a Smart Home framework that considers the spatio-temporal aspects of monitoring human activities.
https://doi.org/10.1142/9789812774118_0116
Malfunctions in machinery are often sources of reduced productivity and increased maintenance costs in various industrial applications. For this reason, machine condition monitoring has been developed to recognize incipient fault states. In this paper, the fault diagnostic problem is tackled within a neuro-fuzzy approach to pattern classification. Besides the primary purpose of a high rate of correct classification, the proposed neuro-fuzzy approach also aims at obtaining a transparent classification model. To this end, appropriate coverage and distinguishability constraints on the fuzzy input partitioning interface are used to achieve physical interpretability of the membership functions and of the associated inference rules. The approach is applied to a case of motor bearing fault classification.
https://doi.org/10.1142/9789812774118_0117
Automatic steering wheel control for autonomous vehicles is presently one of the most interesting challenges in the field of intelligent transportation systems. A few years ago, researchers had to adapt motors or hydraulic systems in order to automatically manage the trajectory of a vehicle, but thanks to technological development in the automotive industry, a new set of tools built into mass-produced cars makes computer control feasible. This is the case for electronic fuel injection, the sequential automatic gearbox and Electric Power Steering (EPS). In this paper we present the development of EPS control for an autonomous vehicle, based on a two-layer fuzzy controller. The necessary computer and electronic equipment has been installed in a Citroën C3 Pluriel mass-produced testbed vehicle, and a set of experiments has been carried out to demonstrate the feasibility of the presented controllers in real situations.
https://doi.org/10.1142/9789812774118_0118
The fault-tolerance principles and methods for the acceleration sensors of tilting trains are discussed based on the practical running conditions of the first tilting train in China. The 2/3(G) voting redundant fault-tolerance strategy is feasible for acceleration sensor fault tolerance. All the algorithms are tested under simulated conditions with data from on-line tests. A program based on them has been developed, performs well in the DSP system, and has passed preliminary on-line testing.
https://doi.org/10.1142/9789812774118_0119
Societal problems in nuclear systems are typically complex and characterized by scientific uncertainties. This paper describes an abstraction approach to solving such problems by using Risk-Risk Analysis (RRA). A Fuzzy Analytic Hierarchy Process (FAHP) framework is used to combine RRA elements with the normative elements of the Precautionary Principle (PP) and to inform regulators on the relative ranking of possible regulatory actions/safety alternatives. To illustrate the utility of this framework, the deep geological repository experience (1991-2006) of the French Nuclear Safety Authority, Autorité de sûreté nucléaire (ASN), is abstracted and solved with hypothetical fuzzy rankings. The results show that such a framework could indeed help demonstrate objectivity and transparency in regulatory decision-making to multiple stakeholders.
https://doi.org/10.1142/9789812774118_0120
The international nuclear proliferation control regime is facing the necessity to handle increasing amounts of information in order to assess States' compliance with their safeguards undertakings. An original methodology is proposed that supports the analyst in the State evaluation process by providing a synthesis of open source information. The methodology builds upon IAEA nuclear proliferation indicators and develops within the framework of fuzzy sets. In order to aggregate the indicators' values, both the reliability of the information sources and the relevance of the indicators are taken into account. Two distinct logical systems based on fuzzy sets are addressed to identify appropriate aggregation tools: possibility theory and fuzzy logic.
https://doi.org/10.1142/9789812774118_0121
With the present probabilistic analysis the author proposes a procedure, based on financial-option theory, for determining a triangular fuzzy number representing the discount rate to be used in the economic calculus of radioactive waste management. The adequate present funding to be set aside for the future is equal to the net present value (NPV) of the future costs, including technical-scenario uncertainties. To take the financial uncertainties into account, the NPV is identified with the strike price of a European put option calculated with the Black-Scholes formula, and the asset value in the managed fund is identified with the current price.
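The Black-Scholes identification described above can be sketched numerically as follows; the fund value, cost NPV, rate, volatility and horizon are purely illustrative numbers, not the paper's inputs.

```python
# Hedged numerical sketch: the NPV of future costs is treated as the strike of a
# European put priced with Black-Scholes, with the fund value as the underlying.
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_put(S, K, r, sigma, T):
    """European put price for underlying S, strike K, rate r, volatility sigma, maturity T."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

npv_future_costs = 1.0e9      # strike: NPV of future waste-management costs (illustrative)
fund_value = 0.9e9            # current value of the managed fund (illustrative)
print(black_scholes_put(S=fund_value, K=npv_future_costs, r=0.03, sigma=0.15, T=30))
```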
https://doi.org/10.1142/9789812774118_0122
This paper describes how the Evidential Reasoning approach for multi-criteria decision analysis, with the support of its software implementation, the Intelligent Decision System (IDS), can be used to analyse whether low-level radioactive waste should be stored at the surface or buried deep underground in the territory of the community of Mol in Belgium. Following an outline of the problem and the assessment criteria, the process of using IDS for data collection, information aggregation and outcome presentation is described in detail. The outcomes of the analysis are examined using the various sensitivity analysis functions of IDS. It is demonstrated that the analysis using IDS can generate informative outcomes to support the decision process.
https://doi.org/10.1142/9789812774118_0123
This work develops the basis for an expert system able to make inferences over a generic structure of indicators, in such a way that it monitors, measures and evaluates questions related to design, operational safety and human performance according to the policies, objectives and goals of the Angra 2 nuclear power plant. This structure, organized in graphs and in an object-oriented context, represents the knowledge of the expert system and is mapped onto fuzzy concepts. The inference engine is of the backward chaining type, associated with a depth-first search process, so that it establishes the representative status of the plant, making it possible to analyze and manage its mission and situation.
https://doi.org/10.1142/9789812774118_0124
Molecular dynamics (MD) is the only computational tool that, without any approximation, can be applied to study atomic level phenomena involving radiation-produced point-defects and their evolution driven by mutual interaction. Kinetic Monte Carlo (KMC) is suitable to extend the timeframe of an atomic level simulation to simulate an irradiation process, but all mechanisms involved must be known in advance, e.g. from MD studies. The present work is an effort to integrate artificial intelligence (AI) techniques, essentially an artificial neural network (ANN) and a fuzzy logic system (FLS), in order to build an adaptive model capable of improving both the physical reliability and the computational efficiency of a KMC model fully parameterized on MD results.
https://doi.org/10.1142/9789812774118_0125
This work focuses on the use of the Artificial Intelligence metaheuristic technique Particle Swarm Optimization (PSO) to optimize nuclear reactor fuel reloading. This is a combinatorial problem in which the goal is to find the best feasible solution, minimizing a specific objective function. As a first step, it is natural to compare the fuel reloading problem with the Traveling Salesman Problem (TSP), since both are combinatorial and similar in terms of complexity, with one advantage: the TSP objective function is simpler to evaluate. Thus, the proposed method has been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, Kennedy and Eberhart presented the PSO technique for optimizing non-linear continuous functions. Recently, some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the one presented in this work. Here we use PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms, as a way to transform the combinatorial problem into a continuous-space search. Particle Swarm Optimization with Random Keys (PSORK) results from this association. The adaptations and changes to the PSO aim to allow its application to the nuclear fuel reloading problem. This work presents PSORK applied to the TSP and the results obtained.
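To illustrate the random-keys idea on a TSP, a rough PSORK-style sketch is given below: each particle is a real-valued vector whose argsort is decoded as a city tour. The instance, swarm size and PSO parameters are illustrative, not those used in the paper.

```python
# Rough sketch of PSO with random keys on a small random TSP instance.
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_particles, n_iters = 20, 30, 300
cities = rng.random((n_cities, 2))
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

def tour_length(keys):
    tour = np.argsort(keys)                      # random-keys decoding
    return D[tour, np.roll(tour, -1)].sum()

X = rng.random((n_particles, n_cities))          # positions (the random keys)
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([tour_length(x) for x in X])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    f = np.array([tour_length(x) for x in X])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best tour length:", pbest_f.min())
```

The same decoding trick carries over to a reload pattern: the argsort of a particle's keys can be read as an assignment order of fuel assemblies to core positions, with the core physics code supplying the objective value.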
https://doi.org/10.1142/9789812774118_0126
A genetic algorithm (GA) is used to search for the parameters of a scaled experiment similar to a reactor pressurizer. The dimensionless similarity numbers are multipliers in the non-dimensional conservation equations of the pressurizer. A “fitness function” of the similarity numbers evaluates the quality of the parameters of the scaled experiment in relation to the full-size pressurizer. Once the parameters are defined, the operation of the experiment is verified by comparing the non-dimensional pressure of a typical transient in the pressurizer and in the experiment.
https://doi.org/10.1142/9789812774118_0127
This work proposes the use of the Particle Swarm Optimization (PSO) algorithm as an alternative tool for solving the nuclear core reload problem. The key contribution of this work is the use of PSO for solving this kind of combinatorial problem. As the original version of PSO is suited only to numerical optimization, an indirect (floating-point) encoding of solution candidates is obtained by applying the random-keys approach. Here, the method is introduced and its application to a practical example is described. Computational experiments demonstrate that, for the sample case, PSO performed better than a Genetic Algorithm (GA). The results obtained are shown and discussed in this paper.
https://doi.org/10.1142/9789812774118_0128
The nuclear reactor core reload pattern optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the next operation cycle. This problem has traditionally been solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The problem is NP-hard, meaning that its complexity grows exponentially with the number of fuel assemblies in the core. Besides that, the problem is non-linear and its search space is highly discontinuous and multimodal. The aim of this work is to apply parallel computational systems based on the Population-Based Incremental Learning (PBIL) algorithm and on the Ant Colony System (ACS) to the core reload pattern optimization problem and to compare the results to those obtained by the genetic algorithm and by a pattern obtained by an expert. The case study is the optimization of cycle 7 of the Angra 1 Pressurized Water Reactor.
https://doi.org/10.1142/9789812774118_0129
Traditionally, the calibration of safety critical nuclear instrumentation has been performed at each refueling cycle. However, many nuclear plants have moved toward condition-directed rather than time-directed calibration. This condition-directed calibration is accomplished through the use of on-line monitoring which commonly uses an autoassociative predictive modeling architecture to assess instrument channel performance. An autoassociative architecture predicts a group of correct sensor values when supplied with a group of sensor values that is corrupted with process and instrument noise, and could also contain faults such as sensor drift or complete failure.
This paper introduces two robust distance measures for use in nonparametric, similarity-based models, specifically the L1-norm and a new robust Euclidean distance function. Representative autoassociative kernel regression (AAKR) models are developed for sensor calibration monitoring and tested with data from an operating nuclear power plant using the standard Euclidean (L2-norm), L1-norm, and robust Euclidean distance functions. It is shown that the alternative robust distance functions have performance advantages for the common task of sensor drift detection. In particular, the L1-norm produces small accuracy and robustness improvements, while the robust Euclidean distance function produces significant robustness improvements at the expense of accuracy.
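A compact sketch of how the distance function enters an AAKR model is shown below; it is not the paper's implementation, the Gaussian kernel weighting and bandwidth are common-but-assumed choices, and the data are synthetic.

```python
# Compact auto-associative kernel regression (AAKR) sketch showing where the choice
# of distance function (L2 vs. L1) enters the model.
import numpy as np

def aakr_predict(X_memory, x_query, bandwidth=1.0, norm="L2"):
    """Return the 'corrected' estimate of x_query from the memory matrix X_memory."""
    diff = X_memory - x_query
    if norm == "L1":
        d = np.abs(diff).sum(axis=1)
    else:                                          # standard Euclidean distance
        d = np.sqrt((diff ** 2).sum(axis=1))
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))   # Gaussian kernel weights
    w /= w.sum()
    return w @ X_memory                            # weighted average of memory vectors

# Illustrative use: a drifted sensor reading is pulled back toward the memory data.
rng = np.random.default_rng(0)
X_mem = rng.normal(size=(500, 4))                  # historical fault-free sensor vectors
query = X_mem[0] + np.array([0.5, 0, 0, 0])        # simulated drift on the first channel
print(aakr_predict(X_mem, query, norm="L1"))
```

Comparing the model's reconstruction with the raw query is what makes drift detectable: a persistent gap on one channel flags that sensor for recalibration.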
https://doi.org/10.1142/9789812774118_0130
This paper proposes the use of Multiple Objective Evolutionary Algorithms (MOEA) for robust system design. A numerical example relative to the evaluation of the reliability of a Residual Heat Removal system of a Nuclear Power Plant illustrates the approach.
https://doi.org/10.1142/9789812774118_0131
Multi-objective genetic algorithms can be effective means for choosing the process features relevant for transient diagnosis. The technique allows identifying a family of equivalently optimal feature subsets, in the Pareto sense. However, difficulties in the convergence of the standard Pareto-based multi-objective genetic algorithm search in large feature spaces may arise in terms of representativeness of the identified Pareto front whose elements may turn out to be unevenly distributed in the objective functions space. To overcome this problem, a modified Niched Pareto Genetic Algorithm is embraced in this work. The performance of the feature subsets examined during the search is evaluated in terms of two optimization objectives: the classification accuracy of a Fuzzy K-Nearest Neighbors classifier and the number of features in the subsets. During the genetic search, the algorithm applies a controlled “niching pressure” to spread out the population in the search space so that convergence is shared on different niches of the Pareto front. The method is tested on a diagnostic problem regarding the classification of simulated transients in the feedwater system of a Boiling Water Reactor.
https://doi.org/10.1142/9789812774118_0132
In this work, the stable Direct Fuzzy Model Reference Adaptive Control is integrated with a Genetic Algorithm search of the optimal values of the critical controller parameters. The optimized controller is applied to a model of nuclear reactor dynamics.
https://doi.org/10.1142/9789812774118_0133
The present work addresses the problem of on-line signal trend identification within a fuzzy logic-based methodology previously proposed in the literature. A modification is investigated which entails the use of singletons instead of triangular fuzzy numbers for the characterization of the truth values of the six parameters describing the dynamic trend of the evolving process. Further, calibration of the model parameters is performed by a genetic algorithm procedure.
https://doi.org/10.1142/9789812774118_0134
In this paper, the task of identifying transients in nuclear systems is tackled by means of a possibilistic fuzzy classifier devised in such a way to recognize the transients belonging to a priori foreseen classes while filtering out unforeseen plant conditions, independently from the operational state of the plant before the transient occurrence. The classifier is constructed through a supervised evolutionary procedure which searches geometric clusters as close as possible to the real physical classes.
The proposed approach is applied to the classification of simulated transients in the feedwater system of a boiling water reactor.
https://doi.org/10.1142/9789812774118_0135
On-Line Monitoring evaluates instrument channel performance by assessing its consistency with other process indications. In nuclear power plants the elimination or reduction of unnecessary field calibrations can reduce associated labor costs, reduce personnel radiation exposure, and reduce the potential for calibration errors. A signal validation system based on fuzzy-neural networks, called PEANO [1], has been developed to assist in on-line calibration monitoring. Currently the system can be applied to a limited set of channels. In this paper we will explore different grouping algorithms to make the system scalable and applicable in large applications.
https://doi.org/10.1142/9789812774118_0136
A fundamental requirement on data to be used for the training of an empirical diagnostic model is that the data has to be sufficiently similar and consistent with what will be observed during on-line monitoring. In other words, the training data has to “cover” the data space within which the monitored process operates. The coverage requirement can quickly become a problem if the process to be monitored has a wide range of operating regimes leading to large variations in the manifestation of the faults of interest in the observable measurement transients. In this paper we propose a novel technique aimed at reducing the variability of fault manifestations through a process of “intelligent normalization” of transients. The approach that we propose in this paper is to use a neural network to perform this mapping. The neural network uses the information contained in all monitored measurements to compute the appropriate mapping into the corresponding normalized state. The paper includes the application of the proposed method to a nuclear power plant transient classification case study.
https://doi.org/10.1142/9789812774118_0137
The development of a user interface whose main purpose is the testing of new power control algorithms in a TRIGA Mark III reactor is presented. The interface is fully compatible with the current computing environment of the reactor's operating digital console. The interface, developed in Visual Basic, has been conceived as an aid in the testing and validation of new and existing algorithms for the ascent, regulation and decrease of the reactor power. The interface calls a DLL file that contains the control program, makes the plant and controller parameters available to the user, and displays some of the key variables of the closed-loop system. The system also displays the condition of the reactor with respect to the nuclear safety constraints imposed by the Mexican Nuclear Regulatory Commission (CNSNS). One of the algorithms under test is based on a control scheme that uses a variable state feedback gain and prediction of the state gain to guarantee compliance with the safety constraints.
https://doi.org/10.1142/9789812774118_bmatter
AUTHOR INDEX.