FLINS, an acronym introduced in 1994 that originally stood for Fuzzy Logic and Intelligent Technologies in Nuclear Science, has since grown into a well-established international research forum that advances the foundations and applications of computational intelligence, both for applied research in general and for complex engineering and decision-support systems.
The principal mission of FLINS is to bridge the gap between machine intelligence and real-world complex systems through joint research between universities and international research institutions, encouraging interdisciplinary work and bringing researchers from multiple disciplines together.
FLINS 2020 is the fourteenth in a series of conferences on computational intelligence systems.
Sample Chapter(s)
Preface
Multi-view clustering via multiple kernel concept factorization
Contents:
Readership: Graduate students, researchers, and academics in artificial intelligence/machine learning, information management, decision sciences, databases/information sciences and fuzzy logic.
https://doi.org/10.1142/9789811223334_fmatter
The following sections are included:
https://doi.org/10.1142/9789811223334_0001
Multi-view clustering has attracted increasing attention in the clustering field. The kernel trick and concept factorization can be applied when clustering linearly non-separable multi-view datasets, but selecting an appropriate kernel function is difficult. To solve this problem, we propose a novel algorithm called Multi-view Clustering via Multiple Kernel Concept Factorization, which processes the original data with a linear combination of multiple kernel functions. The weight of each view and of each kernel is learned automatically. Experimental results show that the proposed method outperforms several state-of-the-art multi-view clustering methods.
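The central device of this chapter, a consensus kernel built as a convex combination of base kernels, can be sketched as follows. This is a minimal illustration, not the authors' code: the kernel choices and the fixed weights are assumptions, whereas the paper learns the weights jointly with the concept factorization.

```python
import math

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.5):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def combined_kernel(x, y, weights=(0.6, 0.4)):
    # In the paper the weights are learned; here they are fixed for
    # illustration and chosen to sum to 1 (a convex combination).
    w_lin, w_rbf = weights
    return w_lin * linear_kernel(x, y) + w_rbf * rbf_kernel(x, y)

# A valid kernel matrix built from the combined kernel on toy 2-D points.
X = [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
K = [[combined_kernel(x, y) for y in X] for x in X]
```

A convex combination of positive semi-definite kernels is itself a valid kernel, which is why the matrix `K` can be fed to any kernel-based factorization.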
https://doi.org/10.1142/9789811223334_0002
In this paper, we study the network energy of oriented graphs. We give several lower and upper bounds on the network energy of an oriented graph in terms of its number of vertices, and establish an upper bound on the network energy of random oriented graphs in terms of the number of vertices and the edge probability. By comparing the network energy of oriented graphs with their skew energy and with the network energy of their underlying graphs, we obtain some relations between these quantities.
https://doi.org/10.1142/9789811223334_0003
By simulating human stylistic classification behavior, a novel design methodology called S2CM for stylistic data classification is developed in this study. The core of S2CM is to build a social network consisting of subnetworks corresponding to each class in the training dataset, and then to compute both the influence of each node and the authority of each subnetwork, so that the style information in the training dataset can be expressed according to the philosophy of social networks. With the built social network, S2CM's prediction for an unseen sample can be implemented cheaply. Experimental results on artificial and benchmark datasets show that S2CM outperforms the comparison methods on stylistic data.
https://doi.org/10.1142/9789811223334_0004
Machine learning-based intrusion detection systems suffer from the high dimensionality of network traffic, which results in low classification accuracy. In this work, we propose a bootstrap-based homogeneous ensemble feature selection (BHmEFS) method that selects a subset of relevant, non-redundant features to improve classification accuracy. Three samples are generated from the original dataset during the bootstrapping process, and the Chi-square method selects an essential feature subset from each. An intersection step then combines these three subsets in a homogeneous fashion to obtain the ensemble feature subset. The performance of BHmEFS and of the plain Chi-square method is evaluated with the J48 classifier, applying the Chi-square method to each of the three bootstrap samples and to the original dataset. Experimental results on the multi-class NSL-KDD dataset show that BHmEFS achieves better classification accuracy than the Chi-square and other methods.
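The three-step pipeline described above (bootstrap, per-sample Chi-square ranking, intersection) can be sketched as follows. This is a toy reconstruction under our own assumptions: categorical features, a hand-rolled Pearson chi-square statistic, and the function names are hypothetical, not the authors' implementation.

```python
import random
from collections import Counter

def chi2_score(feature_col, labels):
    # Pearson chi-square statistic of a categorical feature vs. the class label,
    # computed from the observed/expected contingency table counts.
    n = len(labels)
    f_counts = Counter(feature_col)
    c_counts = Counter(labels)
    joint = Counter(zip(feature_col, labels))
    stat = 0.0
    for f, fc in f_counts.items():
        for c, cc in c_counts.items():
            expected = fc * cc / n
            observed = joint.get((f, c), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def bootstrap_ensemble_select(X, y, k, n_boot=3, seed=0):
    # X: list of rows. Rank features by chi-square on each bootstrap sample,
    # keep the top-k per sample, then intersect the subsets (the ensemble step).
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    chosen = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # bootstrap resample
        Xi, yi = [X[i] for i in idx], [y[i] for i in idx]
        scores = [(chi2_score([row[j] for row in Xi], yi), j) for j in range(d)]
        chosen.append({j for _, j in sorted(scores, reverse=True)[:k]})
    return set.intersection(*chosen)

# Feature 0 mirrors the label; feature 1 is uninformative.
y = [0] * 20 + [1] * 20
X = [[label, i % 2] for i, label in enumerate(y)]
selected = bootstrap_ensemble_select(X, y, k=1)
```

The intersection keeps only features ranked highly in every bootstrap sample, which is what makes the ensemble "homogeneous": a single ranking method applied to multiple resamples.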
https://doi.org/10.1142/9789811223334_0005
The train control system is a typical safety-critical system, and consistency of the system's process state is an important prerequisite for reliable decision-making. This paper explores the properties of decision-making in safety-critical systems and proposes a safety-critical decision process, along with definitions of critical components and cognate variables. A formulation of consistency checking based on computation tree logic is developed. A redesigned routing decision process for trains is modelled in MCMAS; the consistency verification shows that the new decision logic model ensures state consistency of the critical components, which can inform the development of intelligent train control systems.
https://doi.org/10.1142/9789811223334_0006
General K-medoids clustering only divides the data into several clusters, with no ordered relationship between any two clusters. In this paper, we propose a new ordered clustering algorithm, ordered K-medoids clustering, inspired by the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) from multi-criteria decision aid, in order to find the inherent order among clusters. We also analyze the advantages of the ordered K-medoids algorithm through comparisons with other ordered clustering algorithms.
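The PROMETHEE machinery the chapter borrows can be illustrated with net outranking flows, which induce exactly the kind of order between representatives (e.g. cluster medoids) that the abstract describes. A minimal sketch with the "usual" (strict-preference) criterion function; the medoid values and weights are invented for illustration.

```python
def net_flows(alternatives, weights):
    # PROMETHEE net flow phi(a) = positive flow - negative flow, using the
    # "usual" preference function: P(a,b) = 1 on a criterion iff a beats b.
    n = len(alternatives)
    flows = []
    for i, a in enumerate(alternatives):
        phi = 0.0
        for j, b in enumerate(alternatives):
            if i == j:
                continue
            pref_ab = sum(w for w, x, y in zip(weights, a, b) if x > y)
            pref_ba = sum(w for w, x, y in zip(weights, a, b) if y > x)
            phi += pref_ab - pref_ba
        flows.append(phi / (n - 1))
    return flows

# Hypothetical cluster medoids scored on two criteria (higher is better).
medoids = [(0.9, 0.8), (0.4, 0.5), (0.1, 0.2)]
weights = (0.6, 0.4)
flows = net_flows(medoids, weights)
order = sorted(range(len(medoids)), key=lambda i: flows[i], reverse=True)
```

Sorting clusters by the net flow of their medoids yields a complete order, the missing ingredient in plain K-medoids.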
https://doi.org/10.1142/9789811223334_0007
In this paper, a novel Simple but Effective (SE) model is proposed for aspect-based sentiment analysis (ABSA). The structure of SE is simple and consists mainly of three parts: the model first applies pre-trained RoBERTa to obtain the context-sequence and aspect embeddings, then adopts a bi-directional long short-term memory network to capture context and aspect dependencies, and finally obtains an aspect-specific context representation via an attention mechanism. The SE model handles both aspect-term sentiment analysis (ATSA) and aspect-category sentiment analysis (ACSA) tasks. Experiments are conducted on five datasets, including the Multi-Aspect Multi-Sentiment (MAMS) datasets and three benchmark datasets, and the model combined with RoBERTa sets new state-of-the-art results on all five.
https://doi.org/10.1142/9789811223334_0008
One of the best-known stochastic local search (SLS) algorithms for SAT is ProbSAT, which has been widely influential among current SLS algorithms and has attracted increasing interest in recent years. The present work aims at solving hard random SAT (HRS) instances. More specifically, we use ProbSAT as the basis and propose a new clause selection strategy, based on a new probability order, to prioritize specific clauses in ProbSAT, leading to a new algorithm called ProbSAT+. The proposed algorithm is evaluated on benchmarks, e.g., instances from the 2017 SAT Competition, in terms of its capability and efficiency in solving hard random SAT instances. Experimental results show that ProbSAT+ significantly outperforms ProbSAT and other state-of-the-art SLS solvers on HRS instances.
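For readers unfamiliar with the baseline, ProbSAT's characteristic step is to pick a variable from an unsatisfied clause with probability driven only by its break count. A sketch of the published polynomial scheme, f(x) = (1 + break(x))^(-cb); the clause selection strategy that ProbSAT+ modifies is not reproduced here, and the helper name is ours.

```python
import random

def probsat_flip_choice(clause_vars, break_counts, cb=2.3, rng=random.random):
    # ProbSAT's polynomial break-only distribution: variables that would
    # break few satisfied clauses get a much larger flip probability.
    weights = [(1 + break_counts[v]) ** (-cb) for v in clause_vars]
    total = sum(weights)
    r = rng() * total
    for v, w in zip(clause_vars, weights):
        r -= w
        if r <= 0:
            return v
    return clause_vars[-1]
```

With `break_counts = {1: 0, 2: 3}`, variable 1's weight is 1 while variable 2's is roughly 0.04, so the zero-break variable is flipped almost always.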
https://doi.org/10.1142/9789811223334_0009
The rapid adoption of artificial intelligence for automating human-centred tasks has accentuated the importance of interpretable decisions. The Belief-Rule-Base (BRB) is a hybrid expert system that can accommodate human knowledge and capture nonlinear causal relationships as well as uncertainty. This paper presents a strategy to interpret a BRB locally, for a single instance, through the importance of the activated rules and attributes, and globally, to identify the most important rules and attributes in an entire rule base.
https://doi.org/10.1142/9789811223334_0010
In imbalanced learning, most supervised algorithms fail to account for the data distribution and learn a model biased towards the majority class, leading to unfavorable classification performance, particularly on minority class samples. To tackle this problem, the ADASYN algorithm adaptively allocates weights to the minority class examples: a larger weight increases the chance that a minority sample serves as a seed in the synthetic sample generation process. However, ADASYN does not account for noisy examples. This paper therefore presents a modified ADASYN (M-ADASYN) for learning from imbalanced datasets with noisy samples. M-ADASYN considers the distribution of the minority class and creates noise-free minority examples by eliminating noisy samples based on their proximity to the original minority and majority class samples. Experimental results confirm that the predictive performance of M-ADASYN is better than that of the KernelADASYN, ADASYN, and SMOTE algorithms.
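The adaptive weighting that both ADASYN and M-ADASYN build on can be sketched in a few lines: each minority point is weighted by the fraction of its k nearest neighbours that are majority samples, then the weights are normalized into a sampling distribution. One-dimensional points are used purely for brevity; the noise-filtering step of M-ADASYN is not shown.

```python
def adasyn_weights(minority, majority, k=3):
    # ADASYN's density distribution: for each minority point, the fraction of
    # its k nearest neighbours (over all data) that belong to the majority.
    data = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    ratios = []
    for x in minority:
        neigh = sorted((abs(x - p), lbl) for p, lbl in data if p is not x)
        r = sum(lbl for _, lbl in neigh[:k]) / k
        ratios.append(r)
    total = sum(ratios)
    # Normalize so the weights form a distribution over minority seeds.
    return [r / total for r in ratios] if total else [1 / len(minority)] * len(minority)

# The minority point at 5.0 sits inside majority territory, so it gets
# the larger share of the synthetic samples.
w = adasyn_weights([0.0, 5.0], [4.0, 5.5, 6.0, 10.0], k=3)
```

Points surrounded by the majority class are considered "hard to learn" and therefore receive more synthetic neighbours; M-ADASYN's contribution is to first discard those that are merely noise.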
https://doi.org/10.1142/9789811223334_0011
This work predicts traffic congestion from high-speed IoT data streams using Complex Event Processing (CEP) engines. Because CEP engines are reactive in nature and rely on static thresholds, we propose an unsupervised Genetic Algorithm-based clustering procedure that classifies traffic into congestion and no-congestion classes. It also enables the CEP rule engine to form complex events with adaptive thresholds that change with context. An extensive analysis of traffic features is carried out to identify the relationships among temporal, environmental, and social features and their impact on CEP rule formation. A high recall of 96.8%, with fewer false positives than the baseline, indicates better performance, and multiple hypothesis tests further strengthen the evidence for the effectiveness of the proposed approach.
https://doi.org/10.1142/9789811223334_0012
Semi-supervised sparse feature selection methods based on graph learning have attracted attention because of their ability to select highly discriminative features, but most methods proposed in recent years apply the graph Laplacian, which lacks extrapolating power, and do not consider feature manifolds. To solve this problem, this paper proposes a Hessian semi-supervised sparse feature selection algorithm in low-dimensional space that considers feature manifolds (HSLF). Hessian regularization is embedded to better retain the local manifold structure, and a Laplacian graph is constructed from the feature perspective so that the feature selection matrix is smooth with respect to the feature manifold structure. An efficient iterative method is then proposed to solve the resulting objective function. Finally, the effectiveness of the proposed algorithm is verified by comparison with related algorithms.
https://doi.org/10.1142/9789811223334_0013
The integrated production and outbound distribution scheduling (IPODS) problem consists of two combinatorial optimization problems known in the literature as machine scheduling and vehicle routing. Production and distribution decisions must often be made jointly when there is very limited time between the two activities, as with perishable products or products with a limited lifespan. Make-to-order businesses based on the just-in-time philosophy are another application area, because of the zero inventory level between the production and distribution phases. In this study, we develop a new Memetic Algorithm (MA) to obtain optimal or near-optimal solutions in reasonable time. The performance of the algorithm is compared with the solutions of the mathematical model of the same problem studied in [1]. Computational results show that the proposed MA finds, in less than a minute, solutions that are optimal or close to the optima found by CPLEX.
https://doi.org/10.1142/9789811223334_0014
Complex Question Answering (QA) over Knowledge Graphs (KG) is one of the hottest topics in Natural Language Processing research, and temporal questions form an important subclass of complex questions. This paper proposes a triple-to-text-to-question method for automatic temporal question generation based on Wikidata and Wikipedia. Firstly, we extract <subject, predicate, object> triples from Wikidata based on an existing temporal questions benchmark. Secondly, we use distant supervision to obtain sentences with temporal expressions corresponding to the triples from Wikipedia. We then propose a sentence simplification method for complex sentences. Finally, we devise a rule-based method to convert the sentences into temporal questions with answers. The results show that the generated temporal questions have high accuracy in grammar, semantics, and fluency. The generated question-answer pairs can serve as a new benchmark for temporal QA over KG.
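The final rule-based conversion step can be pictured with a template lookup keyed on the Wikidata predicate. The templates, predicates, and the example triple below are our own illustrative assumptions; the chapter's rules operate on the simplified sentences and are considerably richer.

```python
# Hypothetical predicate-to-question templates (not the authors' rule set).
TEMPLATES = {
    "award received": "When did {subject} receive the {object}?",
    "position held": "During which period did {subject} hold the position of {object}?",
}

def triple_to_question(subject, predicate, obj, answer):
    # Turn a <subject, predicate, object> triple plus its temporal answer
    # into a (question, answer) pair; return None for uncovered predicates.
    template = TEMPLATES.get(predicate)
    if template is None:
        return None
    return template.format(subject=subject, object=obj), answer

qa = triple_to_question("Albert Einstein", "award received",
                        "Nobel Prize in Physics", "1921")
```

Generating question-answer pairs this way keeps the answer (the temporal expression recovered by distant supervision) attached to every question, which is what makes the output usable as a benchmark.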
https://doi.org/10.1142/9789811223334_0015
As an important dimensionality reduction and visualization tool for high-dimensional data, t-distributed Stochastic Neighbor Embedding (t-SNE) has been applied in many fields. It converts distances in the raw data space and in the low-dimensional space into a Gaussian distribution and a specific heavy-tailed distribution, the Student t-distribution, respectively. In this paper, we present an extension of t-SNE, named t-copula SNE, which characterizes the correlation and the joint distribution in the low-dimensional space using a t-copula function. A cost function is defined from Kullback-Leibler divergences incorporating the Gaussian distribution and the t-copula-based joint distribution, and is optimized by gradient descent, yielding the corresponding algorithm. Several experiments demonstrate the effectiveness of the proposed method in comparison with related methods.
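The baseline construction being extended here — Gaussian affinities in the original space, Student-t affinities in the embedding, compared by a KL cost — can be sketched directly. A minimal, unoptimized illustration of plain t-SNE's quantities (fixed bandwidth, no perplexity calibration); it does not implement the t-copula variant itself.

```python
import math

def tsne_similarities(high, low, sigma=1.0):
    # Pairwise affinities as in t-SNE: Gaussian in the original space,
    # Student t with one degree of freedom in the low-dimensional space.
    n = len(high)
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    p_raw = [[math.exp(-d2(high[i], high[j]) / (2 * sigma ** 2)) if i != j else 0.0
              for j in range(n)] for i in range(n)]
    q_raw = [[1.0 / (1.0 + d2(low[i], low[j])) if i != j else 0.0
              for j in range(n)] for i in range(n)]
    def normalize(m):
        s = sum(map(sum, m))
        return [[v / s for v in row] for row in m]
    return normalize(p_raw), normalize(q_raw)

def kl(p, q):
    # The t-SNE cost: Kullback-Leibler divergence KL(P || Q) over all pairs.
    return sum(pv * math.log(pv / qv)
               for prow, qrow in zip(p, q)
               for pv, qv in zip(prow, qrow) if pv > 0)

P, Q = tsne_similarities([(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)],
                         [(0.0,), (1.0,), (5.0,)])
cost = kl(P, Q)
```

t-copula SNE keeps this overall KL structure but replaces the low-dimensional Student-t affinities with a t-copula-based joint distribution.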
https://doi.org/10.1142/9789811223334_0016
The emergence of social media allows users to obtain opinions, suggestions, or recommendations from other users about complex information needs. Tasks such as the CLEF Social Book Search 2016 Suggestion Track pursue this issue; their originality lies in handling verbose book-recommendation queries in order to support users searching for books in catalogues of professional metadata and complementary social media (i.e., tags, authors, similar products). In this context, a new technique for discovering communities of books, based on frequent social information (i.e., tags, authors) of similar books, is proposed for book recommendation. Our method detects frequent subgraphs of similar books and uses them to enrich the results returned by a traditional information retrieval system. The approach is tested on a collection of Amazon/LibraryThing book descriptions and a set of queries extracted from the LibraryThing discussion forums.
https://doi.org/10.1142/9789811223334_0017
Streaming data mining is used today in many industrial applications, but model performance deteriorates under concept drift, especially when true labels are unavailable. This paper addresses the need to detect concept drift in unsupervised settings and proposes the Unsupervised Concept Drift Detection (UCDD) method. A clustering technique is first applied to assign artificial labels to the data, and a fast drift detection algorithm then detects boundary changes between the labeled clusters. Empirical evaluation demonstrates the method's effectiveness at detecting various types of concept drift.
https://doi.org/10.1142/9789811223334_0018
Online reviews often help users obtain product information effectively and play a major role in shopping decisions. However, the growing number of reviews makes it difficult for users to find the information that is actually useful for their purchasing decisions. This paper proposes a method for analyzing the usefulness of user reviews based on a comprehensive selection rate derived from the emotional information and star ratings contained in the reviews. By calculating the comprehensive selection rate of the review data and combining it with Bayesian theory, we show that the reviews with the largest comprehensive selection rate are useful for user decision-making. Compared with other methods, the proposed method better helps users find useful review information.
https://doi.org/10.1142/9789811223334_0019
BIM (Building Information Modeling) is a highly effective technology in urban rail transit. To support collaboration among multiple specialties in BIM projects, a new data exchange and sharing mechanism is developed. First, a BIM cloud document sharing model based on inter-specialty collaboration with Revit is established and a dedicated document extraction algorithm is proposed. A technical scheme based on FastDFS and document relationship management with MySQL is then developed for distributed document management in a BIM cloud document access center. Finally, synchronization of the design model, document update applications, and document version information is realized with the Kafka message middleware. The platform can thus fully support BIM collaboration.
https://doi.org/10.1142/9789811223334_0020
The novel concept of spherical fuzzy sets, one of the most recent extensions of ordinary fuzzy sets, provides a larger preference domain for decision makers to assign membership degrees, since the squared sum of the spherical parameters may be at most 1.0. Spherical fuzzy sets generalize Pythagorean fuzzy sets, picture fuzzy sets, and neutrosophic sets. In this paper, we propose an MCDM method based on spherical fuzzy information. The method uses entropy theory to calculate the criteria weights and cosine similarity theory to compute the similarity ratio of each alternative; alternatives are then ranked by similarity ratio in descending order. An illustrative example shows the applicability of the proposed method, and we conclude that it is a useful tool for handling multi-period decision-making problems in a spherical fuzzy environment.
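Two of the chapter's ingredients are easy to make concrete: the spherical fuzzy constraint on the three degrees, and cosine-similarity ranking against an ideal point. A minimal sketch; the alternative values and the choice of ideal are our illustrative assumptions, and the entropy-based weighting step is omitted.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_spherical(mu, nu, pi_):
    # Spherical fuzzy condition: squared sum of membership, non-membership,
    # and hesitancy degrees is at most 1.
    return mu ** 2 + nu ** 2 + pi_ ** 2 <= 1.0

# Hypothetical alternatives as (membership, non-membership, hesitancy) triples.
alternatives = {"A1": (0.9, 0.2, 0.3), "A2": (0.5, 0.5, 0.5), "A3": (0.2, 0.9, 0.3)}
ideal = (1.0, 0.0, 0.0)   # full membership, no non-membership, no hesitancy
ranking = sorted(alternatives,
                 key=lambda k: cosine_similarity(alternatives[k], ideal),
                 reverse=True)
```

Ranking by similarity to the ideal in descending order, as in the chapter, puts the alternative with the strongest membership and weakest non-membership first.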
https://doi.org/10.1142/9789811223334_0021
Multi-criteria decision-making (MCDM) problems have been handled with fuzzy sets in order to obtain results under uncertainty. Picture fuzzy sets (PFSs) are a relatively new tool for MCDM problems, and COmbinative Distance-based ASsessment (CODAS) is an established MCDM method in the literature. In this study, a new method called Picture Fuzzy CODAS is proposed.
https://doi.org/10.1142/9789811223334_0022
Emerging technologies make it possible to collect and process data about the behavior of customers or employees in a specific location. The purpose of this paper is to evaluate existing data collection technologies, treating technology evaluation as a multi-criteria decision-making (MCDM) problem. A decision model containing four criteria and four alternatives is formed, and the Spherical Fuzzy TOPSIS method is used to solve the technology selection problem.
https://doi.org/10.1142/9789811223334_0023
Solar energy is a reliable energy source that is important for environmental sustainability. Solar energy investments are crucial for countries like Turkey that lack fossil fuels but have great solar potential. To make better solar energy investment decisions, the uncertainties and ambiguities inherent in such decisions should be considered. Spherical fuzzy sets, which model membership, non-membership, and hesitancy degrees, enable us to deal with this uncertainty and ambiguity. In this study, a spherical fuzzy net present worth analysis is proposed for uncertain solar energy investments, and its applicability is shown by evaluating a solar power plant investment with the proposed model.
https://doi.org/10.1142/9789811223334_0024
Intuitionistic fuzzy sets are the main source of several recent extensions of ordinary fuzzy sets, such as Pythagorean fuzzy sets, picture fuzzy sets, neutrosophic sets, and q-rung orthopair fuzzy sets. One of the latest of these extensions is spherical fuzzy sets, which have often been employed in multi-attribute decision-making applications in the literature. An advantage of spherical fuzzy sets is that they provide a larger domain for the parameters (membership and non-membership functions and hesitancy) and allow the parameters to be defined independently. The Analytic Hierarchy Process (AHP) is a multi-attribute decision-making method based on pairwise comparisons of criteria and alternatives that requires the consistency of each pairwise comparison matrix. WASPAS (Weighted Aggregated Sum Product Assessment), an integration of the weighted product and simple additive weighting methods, has recently been introduced to the literature. Under impreciseness and vagueness, linguistic evaluations are generally preferred in a decision matrix. There are many fuzzy extensions of the AHP and WASPAS methods, such as intuitionistic fuzzy AHP, intuitionistic fuzzy WASPAS, and Pythagorean fuzzy AHP; however, no paper has integrated AHP and WASPAS using spherical fuzzy sets. This paper therefore contributes to the literature by developing an integrated spherical fuzzy AHP & WASPAS method, which is applied to an outsource manufacturer evaluation and selection problem.
https://doi.org/10.1142/9789811223334_0025
Intuitionistic fuzzy sets (Atanassov, 1986) are sets whose elements have degrees of membership and non-membership whose sum is at most one. An expert can thus express hesitancy through an IFS, which is not possible with ordinary fuzzy sets. This paper presents a literature review on intuitionistic fuzzy sets, classified by type of intuitionistic fuzzy number, with graphical and tabular illustrations for better visualization. We also propose a new extension of IFS called q-spherical fuzzy sets.
https://doi.org/10.1142/9789811223334_0026
Spherical fuzzy sets are the latest extension of ordinary fuzzy sets and are based on three independent parameters: membership degree, non-membership degree, and hesitancy degree, whose squared sum must be at most 1. Several papers have addressed arithmetic operations, aggregation operators, defuzzification operations, and related topics, but gaps remain in these operations for different types of spherical fuzzy numbers, such as triangular, trapezoidal, and left-right (LR) fuzzy numbers. In this paper, we focus on score and accuracy functions for several types of spherical fuzzy numbers, illustrated with numerical examples.
https://doi.org/10.1142/9789811223334_0027
Intuitionistic fuzzy sets (IFS) have been very popular in the literature since they were introduced by Atanassov (1986). Several extensions of IFS have been proposed, such as Pythagorean fuzzy sets (PFS), Fermatean fuzzy sets (FFS), q-rung orthopair fuzzy sets, picture fuzzy sets (PiFS), and spherical fuzzy sets (SFS). The main difference among these extensions is the number of independent membership-function parameters, i.e., membership degree, non-membership degree, and hesitancy degree. In the same way that q-rung orthopair fuzzy sets generalize their predecessors, picture fuzzy sets can be extended to q-spherical fuzzy sets. This paper develops that idea and presents some operations on q-spherical fuzzy sets.
https://doi.org/10.1142/9789811223334_0028
Learning analytics is the measurement of student progress through the collection, analysis, and reporting of data in the learning environment; its methods seek patterns in the gathered data. Learning analytics improves student outcomes in several ways. First, it allows student success to be measured accurately, so that students can find suitable teaching techniques and support themselves. It also gives stakeholders (principals, teachers, parents) proper and faster feedback about learning techniques. The scope and aim of learning analytics projects differ across organizations, and selecting the right project is crucial for the overall success of the learning process. When selecting a project, not only financial benefits but also factors including privacy, access, transparency, security, accuracy, restrictions, and ownership should be taken into account. Evaluating these factors is not easy, since they involve uncertainties, so the selection of learning analytics projects is a complex process. In this study, we use a spherical fuzzy TOPSIS approach for selecting learning analytics projects, which enables us to model the uncertainties with independent parameters.
https://doi.org/10.1142/9789811223334_0029
Control charts are statistical process control tools that reveal abnormalities in a process or product. The data collected when creating control charts are either quantitative or qualitative; quantitative data are easier to collect because they can be measured directly, whereas qualitative data are harder to obtain. Several studies have therefore used fuzzy sets for qualitative control charts, especially with type-1 fuzzy numbers, and control charts have also been created with the more complex numbers of recent fuzzy set extensions. A search of the literature found no control charts built with spherical fuzzy numbers. In this study, spherical fuzzy c-control charts, which have not appeared in the literature before, are created.
https://doi.org/10.1142/9789811223334_0030
In this paper, the notions of L-fuzzy soft quantales and L-fuzzy soft ideals over quantales are proposed. Some properties of L-fuzzy soft quantales (ideals) related to operations of L-fuzzy soft sets are investigated. For fixed parameters, order properties of all L-fuzzy soft quantales (ideals) over a given quantale are discussed. Moreover, the concept of L-fuzzy soft quantale homomorphisms is introduced. It is proved that inverse images of L-fuzzy soft quantales (ideals) under L-fuzzy soft quantale homomorphisms are L-fuzzy soft quantales (ideals), and that, under certain conditions, the same holds for direct images.
https://doi.org/10.1142/9789811223334_0031
There is currently no established data analysis scheme for container aquaculture. In this paper, considering the data analysis demands, a feedforward neural network (FFNN) trained with the back-propagation algorithm is built. The problems of classic BP neural network element-analysis models are identified and an optimization method is put forward: the BP neural network aquaculture element-analysis model is optimized with the Johnson reduction algorithm using a discernibility matrix, and the optimized model passes the experimental evaluation.
https://doi.org/10.1142/9789811223334_0032
The ALL-SAT problem is a special form of the SAT problem that seeks all assignments satisfying a given clause set. Membrane computing is a branch of natural computing that can solve NP problems in polynomial time with a parallel computation model. This paper proposes a new algorithm for the ALL-SAT problem that combines the traditional membrane computing algorithm for ALL-SAT with the concept of full-length clauses, which significantly reduces the space complexity and simplifies the structure of the algorithm.
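To make the problem itself concrete: ALL-SAT asks for every satisfying assignment, not just one. A brute-force sequential reference (the baseline that membrane computing parallelizes), using DIMACS-style signed-integer literals; this sketch does not model membranes or full-length clauses.

```python
from itertools import product

def all_sat(clauses, n_vars):
    # Enumerate every assignment satisfying all clauses (the ALL-SAT problem).
    # Literals are DIMACS-style: +v means variable v, -v means its negation.
    models = []
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            models.append(bits)
    return models

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true.
models = all_sat([[1, 2], [-1, 2]], 2)
```

The exponential loop over `product(...)` is precisely the work a membrane system spreads across its exponentially many parallel membranes.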
https://doi.org/10.1142/9789811223334_0033
In this paper, we introduce a new definition of formula dissimilarity for the problem of premise selection over large theories in first-order logic: selecting the most relevant premises from a large-scale premise set for proving a given conjecture. Building on our previous work, which proposed a first-order term dissimilarity based on substitutions to capture the syntactic differences triggered by functional and variable subterms, we extend the dissimilarity to first-order atoms and define a first-order formula dissimilarity by representing a formula as a set of atoms.
https://doi.org/10.1142/9789811223334_0034
Resolution is a simple, reliable, and complete inference rule in automated reasoning, and contradiction is an important extension of the resolution principle. Based on deductive reasoning with contradictions in propositional logic, this paper proposes the concept of a complete contradiction and studies its properties. First, some basic concepts are presented. Then the concept of a complete contradiction and some related properties are given. Finally, for a contradiction formed by adding a new clause to a complete contradiction, the non-extended change rule of its clauses and the way literals may be added to its clauses are presented.
https://doi.org/10.1142/9789811223334_0035
Guidance ability is one of the typical features of the novel contradiction separation based automated deduction, which extends the canonical resolution rule to a dynamic, flexible multi-clause deduction framework. To take better advantage of this guidance ability during deduction, we propose a clause reusing framework for contradiction separation based automated deduction. The framework is able to generate more decision literals, on which the guidance ability of contradiction separation based deduction relies. Technical analysis and examples illustrate the feasibility of the proposed framework.
https://doi.org/10.1142/9789811223334_0036
When the medium transmittance is optimized with the guided image filter (GIF), the colors of some areas in the defogged image look unnatural and object outlines are not distinct. To improve the dehazing of foggy images, a single-image dehazing algorithm based on an improved GIF is therefore proposed. This paper first proposes an atmospheric light value optimization method based on the pixel mean and a defined threshold. Then, by introducing a first-order edge perceptual factor and a pixel-position perceptual factor, a more accurate transmittance is obtained from the improved GIF. Experimental results show that the algorithm effectively corrects the defogging of images containing white areas, mitigates incomplete defogging, and enhances outline details in the defogged images.
https://doi.org/10.1142/9789811223334_0037
The axiomatic definition of fuzzy sets, based on axioms of membership degrees, was introduced by Xiaodong Pan and Yang Xu. On this basis, this paper establishes an axiomatic foundation of membership degree for (binary) fuzzy relations. The concept of a two-dimensional vague partition is introduced, and the concept of a fuzzy relation in Zadeh's sense is then redefined from the axiomatic perspective in terms of two-dimensional vague partitions. The results obtained here can easily be extended to multi-ary fuzzy relations.
https://doi.org/10.1142/9789811223334_0038
Web services are the basic units of service-oriented architecture (SOA) and a new type of Web application. Web service composition is a technology that forms a new service by combining multiple services, giving the new service more diverse functionality. However, due to the complexity of the functions and the concurrency of the composition process, the new service may not be correct. Therefore, it is very important to verify the interactive behavior of Web service composition. A method for verifying the interaction of Web service compositions by combining Petri nets with symbolic model checking is proposed, along with conversion rules between the Petri net model and the NuSMV language; the Petri net model itself is built from the BPEL process. Finally, automatic verification and temporal logic verification of the Petri net model of a Web service composition are realized, and the feasibility and correctness of the method are illustrated by verifying the ShippingService.
https://doi.org/10.1142/9789811223334_0039
The emergence of pseudo-random number generators (PRNGs) has removed the difficulty of obtaining true random numbers, and PRNGs have become the main method of generating random numbers in modern technology. Because of the wide application of random numbers, the quality of a PRNG has always been a concern, and the periodic characteristics of pseudo-random number sequences are key to ensuring that quality. In this paper, the value of π is computed by the Monte Carlo method, and the error of the result is used to judge the periodicity of a PRNG. We analyze the nature of the Monte Carlo method for calculating π, discuss the relationship between the number of random points and the calculation error, introduce the principle of evaluating the period characteristics of a pseudo-random number sequence, analyze the periodic characteristics of five PRNGs through the experimental data, and draw some interesting conclusions.
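The Monte Carlo estimate of π described in this abstract can be sketched as follows; this is an illustrative minimal version (not the authors' code), using Python's built-in Mersenne Twister as the PRNG under test:

```python
import random

def estimate_pi(n_points, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction falling inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

# The absolute error shrinks roughly as O(1/sqrt(n)); a short or
# repeating PRNG period would stall this convergence, which is the
# diagnostic idea used in the paper.
print(estimate_pi(100_000))
```

The error for a given number of points, compared against the theoretical O(1/√n) rate, gives a simple signal of whether the generator's period has been exhausted.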
https://doi.org/10.1142/9789811223334_0040
The hesitant fuzzy preference relation is useful for decision makers to provide pairwise comparisons over alternatives when they have some hesitancy, and how to improve the consistency level of a hesitant fuzzy preference relation has been widely discussed in the literature. In this paper, we focus on analyzing the worst consistency level of a hesitant fuzzy preference relation. First, we propose a mixed 0-1 linear programming model to calculate the worst consistency level of a hesitant fuzzy preference relation. Afterwards, to improve this worst consistency level, another optimization model is developed that minimizes the overall adjustment amount of the original hesitant fuzzy preference relation; it is further transformed into an integer linear programming model. Finally, two numerical examples are provided to show the feasibility and effectiveness of the proposed models.
https://doi.org/10.1142/9789811223334_0041
XAI (eXplainable Artificial Intelligence) has become an important cross-domain topic between the social sciences and artificial intelligence. In the field of Legal Judgment Prediction (LJP) in particular, computer systems aim to predict judgments based on the facts of legal cases. The features of the subject matter, the subjects’ behaviors, and the objective results are highly related to the crimes and punishments, so the results should be at least coarsely explainable to people. However, many machine learning algorithms cannot make full use of such information and cannot give people an explanation of LJP results. In this paper, an Interpretable Conditional Classification Tree (ICCT) model is proposed to study the multi-class problem in LJP. Our model uses prior information to recursively generate tree nodes. A feature search method for feature domain construction, a data clustering algorithm, and a grouping algorithm for tree node construction are proposed. The growth process of the conditional classification tree realizes the transition from coarse-grained to fine-grained classification, which we call multi-granularity. The experimental results show that the ICCT, which has better interpretability, also outperforms the baselines on judgment prediction tasks.
https://doi.org/10.1142/9789811223334_0042
To attain the general goals of energy security and environmental protection in a balanced way, sustainable energy development should take into account not only energy saving, but also energy efficiency and the flexible combination of different types of energy. The trend of future energy should therefore be to develop more renewable energy resources while transforming centralized energy systems into clean, decentralized energy (DE) systems. This paper describes the concept, development status, development trends, benefits, and challenges of DE systems, and proposes a performance evaluation model based on the multiple criteria decision analysis (MCDA) method, which can be used to incorporate objectives into the decision-making process for DE systems.
https://doi.org/10.1142/9789811223334_0043
The risk management process has an important role in client relations in the financial industry. Due to the nature of risks, evaluations of qualitative and quantitative factors should be performed and combined to take appropriate decisions, and fuzzy logic can be used for this purpose. The vagueness of the subject matter stems from the fact that most risk factors can be interpreted differently depending on other circumstances that directly or indirectly impact the risk assessment. This paper introduces methods for the consolidation of risk factors by means of fuzzy scores and interval-valued fuzzy sets. Interrelations between different risk factors are analysed. Aggregation of risk levels using t-conorms is proposed for obtaining the risk scores that serve as the basis for decisions in client servicing. These self-explanatory assessment and aggregation methods can be used at different stages of the risk management process.
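The t-conorm aggregation of risk levels mentioned above can be illustrated with two standard t-conorms, the maximum and the probabilistic sum; the risk scores below are hypothetical, not taken from the paper:

```python
from functools import reduce

def t_conorm_max(a, b):
    """Maximum t-conorm: the aggregate risk is driven by the worst factor."""
    return max(a, b)

def t_conorm_prob_sum(a, b):
    """Probabilistic sum S(a, b) = a + b - a*b: several moderate
    risks accumulate into a higher aggregate risk."""
    return a + b - a * b

def aggregate(scores, t_conorm):
    # Fold the chosen t-conorm over all fuzzy risk scores in [0, 1].
    return reduce(t_conorm, scores)

risks = [0.2, 0.3, 0.5]  # hypothetical fuzzy risk scores per factor
print(aggregate(risks, t_conorm_max))       # 0.5
print(aggregate(risks, t_conorm_prob_sum))  # ~0.72
```

The choice of t-conorm encodes a policy decision: the maximum reflects a worst-case view, while the probabilistic sum lets many moderate factors jointly raise the overall score.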
https://doi.org/10.1142/9789811223334_0044
In real life, due to the complexity of objective things and the ambiguity of human thinking, people are often accustomed to expressing them with fuzzy linguistic values. To solve decision-making problems with the uncertain information of fuzzy linguistic values, this paper proposes a rule extraction method based on the linguistic 3-tuple concept lattice. By introducing linguistic 3-tuples into the formal context, the linguistic 3-tuple formal context and linguistic 3-tuple formal concept are proposed. Based on the linguistic 3-tuple formal context, we put forward the linguistic 3-tuple decision formal context and an algorithm for rule extraction based on the linguistic 3-tuple concept lattice. Finally, the effectiveness and practicability of this model are illustrated by an example of a student competition prediction system.
https://doi.org/10.1142/9789811223334_0045
The security of data flows is a core technology for data fusion and sharing services. Drawing on the MPI communication mechanism, this paper sets up a secure multi-party computation (SMPC) framework for the trusted flow of data among multiple parties, forming a privacy-preserving computing framework that takes into account the practical needs of semantic security and efficient processing. The SMPC framework adopts the ElGamal homomorphic encryption system. The paper defines addition and subtraction, multiplication, set operations, space vector operations, and comparison operations, and provides the corresponding computation procedures. Based on multi-party computation, application methods for data verification and correlation analysis are proposed, and the feasibility of the method is illustrated by a performance evaluation of the computations.
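The multiplicative homomorphism of ElGamal, which underlies the multiplication operation described above, can be sketched with toy parameters (a minimal sketch for illustration only; real deployments require large safe primes, and the parameters here are hypothetical):

```python
import random

# Toy ElGamal over a small prime-order multiplicative group.
p, g = 467, 2                  # hypothetical public parameters
x = random.randint(2, p - 2)   # private key
h = pow(g, x, p)               # public key

def encrypt(m):
    """Encrypt m in [1, p-1] as the pair (g^k, m * h^k) mod p."""
    k = random.randint(2, p - 2)
    return (pow(g, k, p), (m * pow(h, k, p)) % p)

def decrypt(c):
    """Recover m by multiplying with c1^{-x} (via Fermat's little theorem)."""
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p

def hom_mul(ca, cb):
    """Multiplicative homomorphism: the componentwise product of two
    ciphertexts is a valid encryption of the product of the plaintexts."""
    return ((ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p)
```

A party holding only ciphertexts can thus compute an encryption of `m1 * m2` without learning either factor, which is the building block the paper's multiplication operation relies on.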
https://doi.org/10.1142/9789811223334_0046
In the real world, interpreting natural data in natural language is very common. In our previous papers, we introduced mathematical definitions of generalized intermediate quantifiers, which were used for an analysis of natural data using fuzzy association rules. The main goal of this paper is to define new forms of intermediate quantifiers forming a general cube of opposition and to apply the theory of syllogistic reasoning to derive new information that is not explicitly present in the data.
https://doi.org/10.1142/9789811223334_0047
The main goal of this paper is to introduce new forms of intermediate quantifiers forming a generalized cube of opposition as an extension of the generalized Peterson’s square of opposition. Namely, we utilize the theory of intermediate quantifiers, which provides mathematical interpretation of natural language expressions describing quantities such as “Almost all”, “A few” etc., to describe relationships in data using expressions that are common in human reasoning.
https://doi.org/10.1142/9789811223334_0048
Taking effectiveness as achievement of the goal of use, we present a concept of data quality with “effectiveness of use” at its core and attempt to establish a basic theory and method of data quality. On the basis of distinguishing numerical data into amounts and counts, we define novelty and completeness as members of the goal set; we establish correlation functions between a goal and a dataset, and single-target and multi-target data quality functions, which account for the specification and size of the data. Worked examples verify that different targets for using data lead to different data quality evaluation results, and show that the method proposed in this paper has a comprehensible quantitative form that is easy to implement in a program.
https://doi.org/10.1142/9789811223334_0049
This paper presents the structure of the transport logistics problem. It includes the mathematical description of the relational structure as well as a tabular representation of the model with its interconnections. The domain experts fill in the tables with the knowledge. It forms the basis for creating production rules which apply logical AND as the main operator. Such an approach provides knowledge processing of tasks of transport logistics problem with the use of weighted fuzzy Petri nets in the corresponding software PNeS.
https://doi.org/10.1142/9789811223334_0050
In the field of directed networks, we focus on the information about the connections among the nodes and the direction of the edges. Hence, we define two different relations between individuals: coparenting relations and brotherhood relations. Then, we introduce two new fuzzy measures to model this extra information. Finally, we propose a particular application of these fuzzy measures in pattern-based community detection problems with additional information. We apply a modification of Louvain algorithm to consider this type of information. This work has a wide projection, with many applications in many different real-life problems, for example, in the search of patterns in Social Media sites, such as Twitter.
https://doi.org/10.1142/9789811223334_0051
In today’s business world, many companies and government agencies depend on cloud service infrastructure to host and process their information. The processing load of many cloud services is distributed dynamically, which allows service providers to share cloud resources among different customers but demands efficient resource allocation. In this paper, we introduce a predictive approach that uses data from incoming network connections to predict the incoming load of a cloud. The predicted data are used in the CloudSim cloud simulator to simulate the load on the infrastructure, and to achieve intelligent load balancing a pairwise comparison approach is used to find an optimal amount of resources for an incoming workload. This paper is an exploratory study of the predictive approach to dynamic resource distribution for cloud services.
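The pairwise comparison step for sizing resources can be sketched with the standard row-geometric-mean (AHP-style) prioritization; the comparison matrix below is hypothetical, not from the paper:

```python
import math

def priority_weights(matrix):
    """Derive priority weights from a reciprocal pairwise-comparison
    matrix using the row geometric mean, then normalize to sum to 1."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical comparisons among CPU, memory, and bandwidth demand:
# CPU is judged 3x as important as memory and 5x as bandwidth, etc.
M = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = priority_weights(M)
```

The resulting weights can then scale the predicted load into per-resource allocations for the simulated infrastructure.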
https://doi.org/10.1142/9789811223334_0052
The present paper evaluates the Mexican Stock Exchange and generates a ranking of companies in each subgroup of the criteria hierarchy. This is an alternative to the traditional approaches of portfolio theory: it allows the analysis of subgroups of criteria, showing the investor the performance on each subset of criteria as well as on the complete set of criteria considered. The present study opens a new approach to analyzing more information in share evaluation and to accounting for criteria interactions.
https://doi.org/10.1142/9789811223334_0053
There are classifiers based on the Beta statistical distribution, but most of them assume data collected without errors, and in some cases the precision of the information cannot be guaranteed. This paper presents a new classifier named the Fuzzy Beta Naive Bayes network (FBetaNB). The mathematical formalism is presented, as well as results of its application on simulated data. A brief comparison among FBetaNB, a classical Beta Naive Bayes classifier, and a Naive Bayes classifier was performed. The results show that FBetaNB produces the best performance according to the Overall Accuracy Index and the Kappa and Tau coefficients.
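For context, the classical (crisp) Beta Naive Bayes baseline that the paper compares against can be sketched as follows; this is not the fuzzy FBetaNB itself, and the method-of-moments fitting shown is an assumption of this sketch:

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) density, computed via log-gamma for stability."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_B)

def fit_beta(samples):
    """Method-of-moments estimate of (a, b) from feature values in (0, 1)."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

def predict(x, class_params, priors):
    """Naive Bayes rule: pick the class maximizing the prior times the
    product of per-feature Beta likelihoods (features assumed independent)."""
    best, best_score = None, -1.0
    for c, feats in class_params.items():
        score = priors[c]
        for xi, (a, b) in zip(x, feats):
            score *= beta_pdf(xi, a, b)
        if score > best_score:
            best, best_score = c, score
    return best
```

The fuzzy variant replaces the crisp feature values with fuzzy numbers, which is the extension FBetaNB contributes for imprecise data.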
https://doi.org/10.1142/9789811223334_0054
With technological advances, virtual reality simulators have been developed and applied in several training activities. Methodologies to provide feedback about the training performed by the user in such applications are a research area directly related to computational intelligence. They use interaction data as input for an assessment system integrated into the simulator. This paper proposes a Fuzzy Triangular Naive Bayes system for a Single User’s Assessment System (SUAS). Its application is of interest when interaction data can be modeled by a triangular distribution. Results on this SUAS’s performance, obtained on simulated data and considering online assessment, were satisfactory according to the Kappa coefficient when compared with other methods.
https://doi.org/10.1142/9789811223334_0055
Developing group recommender systems has become a vital requirement due to the prevalence of group activities. However, existing group recommender systems still suffer from the data sparsity problem because they rely on individual recommendation methods with a predefined aggregation strategy. To solve this problem, we propose a cross-domain group recommender system with a generalized aggregation strategy. The generalized aggregation strategy builds a group profile in the target domain with the help of individual preferences extracted from a source domain with sufficient data. By adding constraints between the individual preferences and the group profile, knowledge is transferred to assist the group recommendation task in the target domain. Experiments on a real-world dataset justify the effectiveness and rationality of the proposed cross-domain recommender system: the accuracy of group recommendation increases across different sparsity ratios with the help of individual data from the source domain.
https://doi.org/10.1142/9789811223334_0056
Cross-domain recommendation has proved to be an effective solution to the data sparsity problem that commonly exists in recommender systems. However, a challenging issue remains: how to transfer valuable knowledge from multiple source domains and balance their effects on the target domain under a sparse setting. To handle this issue, we develop a multi-source shared cross-domain recommender system that extracts shared latent features from multiple domains to assist the recommendation task in a sparse target domain. This is achieved through a multiple-domain-shared autoencoder and an attentive module. We then propose an enhanced method that is specific to each user, so that it can provide personalized services. Experiments conducted on real-world datasets show that the proposed methods perform well and improve the accuracy of recommendations in the target domain even when the datasets are quite sparse.
https://doi.org/10.1142/9789811223334_0057
As a new channel for job seeking, online recruitment platforms and their job recommender systems have become important to applicants. However, existing recommendation methods have limited effectiveness because they fail to consider employers’ feedback and behavioral information. Taking two-sided matching and diversity into account, this paper proposes a machine-learning based job recommendation method, Job-PI, that jointly optimizes applicant preferences and employer interests. Experiments on both simulated and real-world data show the effectiveness and superiority of Job-PI over other methods.
https://doi.org/10.1142/9789811223334_0058
Aiming to recommend potential collaborators for academic entities such as researchers and institutions, this paper develops a social recommender system based on bibliometric indicators and network analytics. Targeting scholarly articles, the proposed recommender system exploits co-authorships as established social relations and proposes a link prediction model for discovering potential relations in a co-authorship network. A case study recommending scientific collaborators for research entities working on gene-related diseases demonstrates the reliability of this study.
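Link prediction on a co-authorship network, as described above, can be illustrated with the simple common-neighbors heuristic; the paper's actual model is not specified here, and the author names below are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

def coauthor_graph(papers):
    """Build an undirected co-authorship graph from per-paper author lists."""
    adj = defaultdict(set)
    for authors in papers:
        for a, b in combinations(authors, 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def common_neighbor_scores(adj):
    """Score each pair of not-yet-linked authors by the number of shared
    co-authors -- a basic link-prediction heuristic for recommending
    potential collaborators."""
    scores = {}
    for a, b in combinations(sorted(adj), 2):
        if b not in adj[a]:
            scores[(a, b)] = len(adj[a] & adj[b])
    return scores

papers = [["ann", "bob"], ["bob", "carl"], ["ann", "carl"],
          ["carl", "dee"], ["bob", "dee"]]
recommendations = common_neighbor_scores(coauthor_graph(papers))
```

Pairs with the highest scores are the recommended potential collaborations; richer models additionally weight edges by bibliometric indicators.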
https://doi.org/10.1142/9789811223334_0059
In today’s fast-paced life, E-learning has become a new way to improve oneself and stay competitive. Recommendation is needed in an E-learning system to filter suitable courses for users facing a massive amount of information during course enrolment. However, due to the complexity of each course and changes in user interest, it is challenging to provide accurate recommendations. This paper proposes an E-learning recommender system that combines a recurrent neural network (RNN) with content-based techniques to support users in course selection. The content-based techniques mine the relationships between courses, and the recurrent neural network extracts user interests from the series of his/her enrolled courses. The proposed framework takes sequential connections into consideration and intends to provide students with more precise course recommendations. The system is implemented with the Django framework and an ElephantSQL cloud database, and deployed on Amazon Elastic Compute Cloud.
https://doi.org/10.1142/9789811223334_0060
Recommender systems have been widely adopted in real-world applications. Collaborative Filtering (CF) and matrix-based approaches have been at the forefront for the past decade in both implicit and explicit recommendation tasks. One prominent challenge most recommendation approaches face is dealing with different data quality conditions, i.e., cold start and data sparsity. Some model-based CF methods use a condensed latent space to overcome the sparsity problem; however, under a constant cold-start condition, CF-based approaches can be ineffective and costly. In this paper, we propose MERec, a novel approach that adopts graph meta-path embeddings to learn item/user features independently, in addition to learning from user-item interactions. It allows unseen data to be incorporated as part of the user/item learning process. Our experiments demonstrate an effective reduction of the cold-start impact for both new and sparse datasets.
https://doi.org/10.1142/9789811223334_0061
Disrupted urban rail services are routinely experienced by rail passengers throughout the world. To minimize the impact on passengers, transport providers temporarily run rail replacement bus services that use buses to replace passenger trains. This paper presents a novel data-driven bussing optimization system that can efficiently, reliably, and dynamically determine replacement bus timetables. The system first infers the travel behaviour of train passengers via data mining, then formulates the route selection problem as a multi-objective optimization problem solved with a meta-heuristic. Finally, a demand-driven scheduling approach generates the replacement bus timetable. The system is jointly developed and implemented by the University of Technology Sydney and Sydney Trains. Deployment results show that the system not only saved costs for the transport operators, but also significantly improved the customer experience in New South Wales, Australia.
https://doi.org/10.1142/9789811223334_0062
In recent years, many e-commerce sites have allowed users to evaluate multiple aspects of a product, which provides more comprehensive user preference information for the recommender system and helps improve recommendation quality. Multi-criteria recommender systems have therefore increasingly attracted researchers’ interest. In this paper, we propose an information entropy based multi-criteria recommendation algorithm. Experimental results on the Yahoo Movie dataset show that the new method can improve the recommendation accuracy of recommender systems.
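One common way to use information entropy with multi-criteria ratings is entropy weighting, where criteria whose ratings vary more across users carry more weight; the sketch below illustrates that idea under assumed data (it is not necessarily the paper's exact algorithm):

```python
import math

def entropy_weights(ratings):
    """Entropy weighting for a multi-criteria rating matrix: each column
    is one criterion; criteria whose ratings diverge more (lower entropy
    of the normalized column) receive a larger weight."""
    n, m = len(ratings), len(ratings[0])
    k = 1.0 / math.log(n)
    divergences = []
    for j in range(m):
        col = [ratings[i][j] for i in range(n)]
        total = sum(col)
        probs = [v / total for v in col]
        h = -k * sum(p * math.log(p) for p in probs if p > 0)
        divergences.append(max(0.0, 1.0 - h))  # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical ratings (rows: users; columns: criteria such as
# story, acting, visuals). Only the second criterion varies.
R = [[5, 3, 4],
     [5, 1, 4],
     [5, 5, 4]]
weights = entropy_weights(R)
```

Criteria on which every user agrees carry no discriminating information, so their entropy is maximal and their weight collapses toward zero.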
https://doi.org/10.1142/9789811223334_0063
Clinicians make decisions every single day that affect life and death and quality of life. It is important to support clinicians by discovering medical knowledge from accumulated electronic health records (EHRs). The integration of genomic information with EHRs has long been recognized by the medical community as capturing inherent features of disease, and the demand for a clinical recommender system able to deal with both genomic and phenotypic data is urgent. This paper proposes a framework for a clinical recommender system with genomic information, used in the clinical process and connecting four types of users: clinicians, patients, clinical labs, and researchers. Using models and methods from artificial intelligence (AI), functions including diagnosis prediction, disease risk prediction, test prediction, and event prediction are designed in this framework. The proposed framework will help clinicians decide on the next step of clinical care for patients.
https://doi.org/10.1142/9789811223334_0064
Association rules define relationships between items in sales databases. They have been used primarily to organize related products in stores so as to make them more visible to consumers, which may increase sales and profits. They have rarely been used, however, in recommender systems, where algorithms provide instant recommendations by processing consumers’ interests gathered while browsing online. The vast amount of information collected in transaction data saved on backup servers is poorly exploited because it is not connected to the Internet, although interesting and personalized recommendations could be created by finding the most frequent itemsets, or the most interesting rules, in such databases. In this paper, we critique the existing research on recommender systems, showing their drawbacks, and on association rules, explaining their advantages in detail. Finally, we propose several solutions for producing high-quality, accurate recommendations by applying novel combinations of techniques observed in this research area, including association-rules-based recommender systems.
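The frequent-itemset mining that association rules build on can be sketched with a simplified Apriori pass per itemset size; the basket data below are hypothetical:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count candidate itemsets of growing size and keep those meeting
    the minimum support (fraction of transactions containing them)."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    frequent, size = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        kept = {c: v / n for c, v in counts.items() if v / n >= min_support}
        frequent.update(kept)
        size += 1
        # Simplified Apriori join: grow candidates only from items that
        # still appear in some frequent itemset of the previous size.
        alive = sorted({i for f in kept for i in f})
        candidates = [frozenset(c) for c in combinations(alive, size)]
    return frequent

baskets = [frozenset(t) for t in
           [{"bread", "milk"}, {"bread", "butter"},
            {"bread", "milk", "butter"}, {"milk"}]]
itemsets = frequent_itemsets(baskets, min_support=0.5)
```

Association rules are then read off the frequent itemsets (e.g. `{bread} -> {milk}` with confidence support({bread, milk}) / support({bread})), and the highest-confidence rules drive the recommendations.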
https://doi.org/10.1142/9789811223334_0065
The purpose of this paper is to explore the relationships among user experience, perceived value, and purchase intention, which have recently received wide attention in fashion e-commerce, especially sportswear. Taking sportswear as the object, 265 valid samples were obtained through a survey, and hypothesis models were then validated. In Study 2, 20 participants were recruited for a between-subjects eye-tracking experiment. The results show that sensory experience, thinking experience, and affective experience have positive effects on perceived value and purchase intention, with perceived value playing a mediating role. The findings suggest key operational points for sportswear companies, including focusing on products, importing real scenes, and rationally organizing textual details.
https://doi.org/10.1142/9789811223334_0066
This paper introduces a reinforcement learning based decision support system for the textile manufacturing process. A solution optimization problem for color fading ozonation is discussed and set up as a Markov Decision Process (MDP) in terms of the tuple {S, A, P, R}. Q-learning is used to train an agent that interacts with the constructed environment by accumulating the reward R. The application results show that the proposed MDP model expresses the optimization problem of the textile manufacturing process well; the use of reinforcement learning to support decision making in this sector is therefore shown to be applicable, with promising prospects.
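The Q-learning loop over an MDP {S, A, P, R} can be sketched on a toy two-state process; the environment below is a hypothetical stand-in, not the paper's ozonation setup:

```python
import random

def q_learning(states, actions, transition, reward, episodes=2000,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: act epsilon-greedily, observe the reward,
    and move Q(s, a) toward the bootstrapped target r + gamma * max Q(s', .)."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(20):
            if rng.random() < epsilon:          # explore
                a = rng.choice(actions)
            else:                               # exploit
                a = max(actions, key=lambda x: Q[(s, x)])
            s2 = transition(s, a, rng)
            target = reward(s, a) + gamma * max(Q[(s2, x)] for x in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

def transition(s, a, rng):
    # Hypothetical two-setting process: the "high" action drives state 1.
    return 1 if a == "high" else 0

def reward(s, a):
    # Reward alternating settings: "high" in state 0, "low" in state 1.
    return 1.0 if (s == 0 and a == "high") or (s == 1 and a == "low") else 0.0

Q = q_learning([0, 1], ["low", "high"], transition, reward)
```

After training, the greedy policy `argmax_a Q(s, a)` recovers the optimal alternating settings, which mirrors how the paper's agent learns process parameters from accumulated reward.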
https://doi.org/10.1142/9789811223334_0067
The DCGAN image generation module can be applied to personalized fashion design, but how to generate more personalized designs is an important problem. In this paper, we present a new method for the personalization problem that uses customers’ judgments: images selected by the customer are put back into the training data for iterative training. The method was verified in our personalized little black dress interaction design; after several iterations, consumer satisfaction with the images generated by the DCGAN improved. The method can also be applied to other styles of fashion design.
https://doi.org/10.1142/9789811223334_0068
With the advent of the big data era, this paper presents a new approach to generating garment ease allowances using a factor analysis-based multilayer perceptron artificial neural network for garment personalization, aiming to realize intelligent garment pattern development from 3D anthropometric big data. First, an anthropometric experiment was conducted with a whole-body scanner. The original anthropometric data were then analyzed by factor analysis to reduce dimensionality and identify feature measurements; the pants block was taken as the example in this study. The garment ease values for each subject were acquired through a process of individualized pattern making and fitting. Afterward, the multilayer perceptron artificial neural network model was established and simulated to generate the ease allowances. Through linear regression analysis and fitting tests, the results show that the approach is feasible and can generate garment ease values relatively quickly and precisely.
https://doi.org/10.1142/9789811223334_0069
With the evolution of the manufacturing industry, traceability is becoming one of the fundamental elements of modern, sustainable manufacturing processes. However, traceability information platforms for collaborative and integrated manufacturing environments have not been sufficiently addressed. In this paper, the general architecture of a traceability information management platform is proposed for manufacturing application scenarios, consisting of three hierarchical layers: an object configuration and data collection layer, a data management layer, and a data analytics and application layer. The platform is designed for real-time information capture and integration, establishing the data foundations for potential applications in data-driven decision-making and process optimization. The proposed solution has been implemented in a textile dyeing production line, realising manufacturing data collection and management with product traceability services and demonstrating the feasibility and significance of the proposed framework.
https://doi.org/10.1142/9789811223334_0070
With a tremendous increase in mobile and wearable devices, sensor-based activity recognition has drawn much attention in the past years. Applications of Human Activity Recognition are receiving more and more attention, especially in eldercare and healthcare as an assistive technology when combined with the Internet of Things. In this paper, we propose three deep learning approaches to improve the accuracy of activity detection on the WISDM dataset. In particular, we apply a convolutional neural network to extract features, then use a softmax function, a support vector machine, or a random forest for the classification task. The results show that the hybrid algorithm combining the convolutional neural network with the support vector machine outperforms all previous methods in classifying every activity. In addition, both the support vector machine and the random forest show better classification accuracy than the neural network classifier and the earlier approaches.
https://doi.org/10.1142/9789811223334_0071
Anomaly detection has a long history in the research community due to its many applications. The literature records various Artificial Intelligence (AI) techniques applied to detect anomalies without a priori knowledge about them, but anomaly detection approaches for multivariate time series data still make too many unrealistic assumptions to apply in industry. This paper therefore proposes a new, efficient anomaly detection approach for multivariate time series data: a hybrid of an LSTM Autoencoder and Isolation Forest (iForest), which combines the feature extraction strength of the LSTM Autoencoder with the strong anomaly detection performance of iForest. The results show that our approach significantly outperforms the One-Class Support Vector Machine (OCSVM) method. The approach is implemented on simulated data from the fashion industry (FI).
https://doi.org/10.1142/9789811223334_0072
In this paper, we propose a new model for predicting the fashion features of fabrics. It enables the selection of fabrics satisfying fashion demands from a small number of technical parameters that are easy to measure. For this, we set up three mathematical models. Using fuzzy techniques, we first define several fuzzy sets to express the measured technical parameters and sensory properties of fabrics. Then we set up a relational model between the technical parameters and sensory properties using the rough set method, and a relational model between the fashion themes (expressing the fashion features of fabrics) and sensory properties using fuzzy techniques. Combining the two models, we establish the relational model between fashion themes and technical parameters. The proposed model has been validated through a successful real design case.
https://doi.org/10.1142/9789811223334_0073
This study presents a novel taxonomy of short message service campaigns for the purpose of building an intelligent marketing system. The main issue of mass marketing is that one size does not fit everybody; it is challenging to meet different consumer needs, and with the help of artificial intelligence, marketers can be supported in overcoming some of these challenges. This study uses a mixed methods approach in which design science and grounded theory are used to produce a short message service campaign taxonomy for a future intelligent marketing system. Data collection consisted of 386 previously active campaigns used over 33 months to build the taxonomy. An experimental study was conducted to test the effectiveness of the proposed taxonomy. The experiments involved automatic generation of campaign messages, whose validity, and hence that of the proposed taxonomy, was ascertained by analysing the messages within a business context. The study concludes that the system, intertwined with the taxonomy, performs comparably to a regular campaign; as a further proof of concept, the business context deemed the generated campaign texts semantically and syntactically sound enough to run in active campaigns as experiments.
https://doi.org/10.1142/9789811223334_0074
Recommendation systems in fashion are used to provide recommendations to users on clothing items, matching styles, and size or fit. These recommendations are generated based on user actions such as ratings, reviews or general interaction with a seller. There is an increased adoption of implicit feedback in models aimed at providing recommendations in fashion. This paper aims to understand the nature of implicit user feedback in fashion recommendation systems by following guidelines to group user actions. Categories of user actions that characterize implicit feedback are examination, retention, reference, and annotation. Each category describes a specific set of actions a user takes. It is observed that fashion recommendations using implicit user feedback mostly rely on retention as a user action to provide recommendations.
https://doi.org/10.1142/9789811223334_0075
The precise subdivision of body shape is the basis of fashion design, and a reasonable clothing style is a necessary prerequisite for improving clothing comfort. The fit of the armhole is one of the most important factors affecting the comfort of men's upper garments, and the shape of the armhole is closely related to the morphological characteristics of the arm root. However, because data acquisition at the arm root is difficult, it is often neglected. In order to improve the fit of men's upper garments, this paper focuses on the morphological characteristics of the arm root. Based on human skeletal characteristics and expert experience, the upper-body data of 37 male college and graduate students aged 18–25 in China were collected using 4 kinds of manual measurement methods. According to the analysis results, combined with expert experience, arm root shapes are divided into 7 categories. Comparison with the traditional classification results demonstrates the rationality of the method, and a classification index is given.
https://doi.org/10.1142/9789811223334_0076
This paper proposes a new approach to Thai plagiarism detection. The approach uses co-occurrence graphs as a corpus to measure the distance between words and to find the centroid terms of documents. Suspicious documents and source documents are represented by sequences of distinct centroid terms, and every sequence of centroid terms in a suspicious document is compared to those of the source documents. A word-count metric is then applied to measure the similarity between documents. The main advantage of the proposed method is that it classifies texts using information about semantic distances and similarities. Moreover, it uncovers topical relationships between documents even if their wording differs.
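The notion of a centroid term can be sketched as follows (a toy co-occurrence graph with unweighted BFS distances; the paper's corpus-scale graph and distance measure may differ):

```python
from collections import deque

def bfs_distances(graph, start):
    # Unweighted shortest-path distances from `start` via BFS.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in graph.get(node, ()):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def centroid_term(graph, doc_words):
    # The centroid term minimises the mean graph distance
    # to all words of the document.
    best, best_score = None, float("inf")
    for cand in graph:
        dist = bfs_distances(graph, cand)
        if not all(w in dist for w in doc_words):
            continue  # candidate cannot reach every document word
        score = sum(dist[w] for w in doc_words) / len(doc_words)
        if score < best_score:
            best, best_score = cand, score
    return best

# Toy co-occurrence graph (undirected adjacency lists).
g = {
    "rice": ["food", "thai"],
    "food": ["rice", "curry", "thai"],
    "curry": ["food", "thai"],
    "thai": ["rice", "food", "curry"],
}
print(centroid_term(g, ["rice", "curry", "food"]))  # → food
```

A document is then summarised by the sequence of such centroid terms, so two documents about the same topic align even when their exact wording differs.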
https://doi.org/10.1142/9789811223334_0077
Healthcare has always been an area to which the public and governments pay close attention. In recent years, the use of modern technology in medicine has helped both healthcare professionals and patients. Because these technologies are based on the Internet and modern mobile devices, they allow remote monitoring using Internet of Things technology. With this technology, instead of staying in the hospital for hours, patients can stay at home and communicate with their doctors online, and the health status of patients can easily be monitored in real time. In this paper we develop a real-time multi-parametric human health monitoring and prediction system. The system is inspired by a technique that tracks and collects a person's clinical data in order to identify particular conditions, helping to ensure fast prediction and awareness. The main advantages of the developed system are its novelty, multi-functionality, and availability at an affordable cost.
https://doi.org/10.1142/9789811223334_0078
In this paper, issues such as the preprocessing of medical data, reclassification of the training sets, determination of the importance of classes, formation of reference tables, and selection of an informative feature set that differentiates between class objects formed by medical professionals are discussed and solved. In most of the studied references [5–8, 11–13], Fisher's criterion is used to obtain solutions to these tasks. Algorithms for estimate calculation, as well as the related software programs, are also used for solving the problems; for all cases, algorithms and software programs are suggested.
The study consists of two important steps. The first step is to build a reference table based on the importance of the features and objects as well as their contribution to the classes [1–4, 9, 10]; the second step concerns the choice of the most useful set of characteristic features to be investigated. This corresponds to solving the problem of selecting a set of informative features from a given table, visualizing them, and determining the contribution of the feature set to the formation of classes [1–13].
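Fisher's criterion, mentioned above for ranking features, can be sketched as follows (the two toy feature samples are made up for illustration):

```python
def fisher_score(class_a, class_b):
    # Fisher criterion for a single feature:
    # (difference of class means)^2 / (sum of within-class variances).
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# Feature 1 separates the two classes well, feature 2 does not.
healthy_f1, sick_f1 = [1.0, 1.2, 0.9], [3.0, 3.1, 2.8]
healthy_f2, sick_f2 = [5.0, 9.0, 1.0], [4.0, 8.0, 2.0]
assert fisher_score(healthy_f1, sick_f1) > fisher_score(healthy_f2, sick_f2)
```

Features with a high score separate the classes widely relative to their spread, which is what makes them candidates for an informative feature set.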
https://doi.org/10.1142/9789811223334_0079
In this paper we propose a stochastic agent-based model (ABM) and simulation for controlling TB dynamics in a population. Two population structures are proposed: the first is based on a mixed population, while the second is based on a 4-level population. Simulations of the model with a mixed population structure confirm that good patient care strengthens a nation's health system. The results show that if patients are effectively treated over a long period of time, the population may remain free of infection for a long period (several years) even in the absence of immediate care. The results also confirm that variation in the radius of contamination has an impact on the incidence and prevalence of the disease. The model proposed here is general and could be applied in several countries without major changes.
https://doi.org/10.1142/9789811223334_0080
A new decentralized multi-sensor information fusion scheme is proposed in this paper, in which each sensor exchanges messages with its neighbors and locally fuses them in the absence of a fusion center. A fusion algorithm is designed such that the locally fused messages of all sensors achieve consensus. A numerical example illustrates the effectiveness of the designed fusion algorithm.
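The consensus idea behind such fusion schemes can be illustrated with a minimal sketch (not the paper's algorithm; the ring topology, step size, and measurements are made up): each sensor repeatedly moves its estimate toward those of its neighbors until all local values agree on the global average.

```python
def consensus_step(values, neighbors, eps=0.2):
    # One synchronous consensus update: each sensor moves toward
    # the average of its neighbors' current estimates.
    return [
        v + eps * sum(values[j] - v for j in neighbors[i])
        for i, v in enumerate(values)
    ]

# Ring of 4 sensors, each holding a noisy local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [10.0, 12.0, 8.0, 14.0]
for _ in range(100):
    values = consensus_step(values, neighbors)
# All local estimates converge to the global average, 11.0,
# without any sensor ever talking to a central fusion node.
```

Because updates use only neighbor-to-neighbor messages, no fusion center is required, which is the defining property of the decentralized scheme.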
https://doi.org/10.1142/9789811223334_0081
In view of the non-linear and non-stationary characteristics of the vibration signals of bearing faults and the difficulty of diagnosing unknown compound faults, a novel method for diagnosing unknown compound faults based on a graph convolutional network (GCN) is proposed in this paper. First, the original vibration signals are transformed into spectrograms through the wavelet transform and input to the GCN for learning. Then, 1% of the unknown compound faults are used in an incremental strategy to fine-tune the model parameters in order to enhance the generalization ability of the model. Finally, we diagnose different types of unknown compound faults on laboratory simulation data. The experimental results show that our method outperforms state-of-the-art baselines in the diagnosis of unknown compound faults.
https://doi.org/10.1142/9789811223334_0082
Human pose estimation has made great progress with the rapid development of diverse deep Convolutional Neural Network (CNN) models. However, most existing human pose estimation approaches aim only at improving model generalization performance and ignore efficiency. In this paper, we propose a faster hourglass network for predicting human pose. Specifically, we combine the mixed depthwise convolution strategy with the basic hourglass network to design new residual blocks, and we employ two residual blocks at each location in the hourglass to improve the network architecture. Extensive experiments show the advantages of our faster human pose network over various existing human pose estimation approaches in terms of both model efficiency and accuracy on two common benchmark datasets, MPII Human Pose and Leeds Sports Pose.
https://doi.org/10.1142/9789811223334_0083
Popular speech recognition models are trained on large corpora to achieve high accuracy, but it is difficult to obtain speech recognition for minority languages, which makes it a challenging few-shot learning task. MobileNetV2 is a sparse network, which helps reduce the number of training parameters; meta learning imitates the human ability to learn and is an effective method for handling few-shot or unseen tasks. In this paper, we evaluate meta-metric learning on WA language datasets and compare it with MobileNetV2. Experimental results and comprehensive analysis show that the more training data are used, the better the generalization. Meta-metric learning achieves better generalization and a faster convergence rate than MobileNetV2 in few-shot speech recognition.
https://doi.org/10.1142/9789811223334_0084
Text classification is a fundamental task in Natural Language Processing (NLP). In this paper, we propose a Weakly-Supervised Character-level Convolutional Network (WSCCN) for text classification. In contrast to word-based models, WSCCN extracts information from raw character signals. Further, through the combination of global pooling and fully convolutional networks, our model retains semantic position information from beginning to end. Extensive experiments on seven of the most widely used large-scale datasets show that WSCCN not only achieves state-of-the-art or competitive classification results but also highlights the parts of the text that are critical for classification.
https://doi.org/10.1142/9789811223334_0085
The computer vision abilities of autonomous systems place a significant computational demand on the underlying hardware. We investigate methods to reduce the amount of data provided to Artificial Neural Network (ANN)-based image classification. We consider transformation techniques, as many visual sensors inherently provide hardware components for the Discrete Cosine Transform (DCT). The focus of this paper is on a fast prediction phase, at the cost of memory and a more sophisticated training phase. In particular, we partition the data in the frequency domain, define the problem via mathematical programming, and train a set of distinct ANNs. An online algorithm selects the smallest feasible ANN in the prediction phase.
https://doi.org/10.1142/9789811223334_0086
Various deep learning networks have been developed to revisit traditional learning tasks in machine learning. Graph convolutional networks (GCN) are powerful deep neural tools for semi-supervised classification on graph-structured data, integrating local graph topology and vertex features in the convolutional networks. Despite their success, the current implementations have limited capability to deepen the network layers to expand the receptive field and capture global information. In this paper, we develop high-order graph convolutional networks to capture long-distance similarities in graphs. We also propose an edge dropout method to further improve the generalization ability. Empirical results show that our deep networks achieve superior performance in semi-supervised classification.
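The high-order propagation idea can be sketched with NumPy (illustrative only; the toy path graph, the summing of adjacency powers, and the layer sizes are our assumptions, not the authors' architecture):

```python
import numpy as np

def normalized_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix.
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def high_order_layer(A, H, W, order=2):
    # Mix powers of A_hat so the receptive field reaches
    # `order`-hop neighbours within a single layer.
    A_hat = normalized_adjacency(A)
    P = sum(np.linalg.matrix_power(A_hat, k) for k in range(1, order + 1))
    return np.maximum(P @ H @ W, 0.0)  # ReLU

# Tiny path graph 0 - 1 - 2 with 2-dimensional vertex features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.eye(3)[:, :2]          # one-hot-like features
W = np.ones((2, 2)) * 0.5
out = high_order_layer(A, H, W, order=2)
```

With `order=2`, vertex 0 receives information from vertex 2 even though they share no edge, which is the long-distance similarity the layer is designed to capture.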
https://doi.org/10.1142/9789811223334_0087
Aspect-Based Sentiment Analysis (ABSA) is a challenging task in recent natural language processing research. It aims at extracting people's sentiment polarity toward a specific category. In this paper, we observe that the key phrases in a sentence are highly relevant to certain aspect words and influence the final result. We propose a joint LSTM with Multi-CNN network by hierArchical aTtention (MAT) model to exploit this. Specifically, we design a double-embedding module with a fully connected layer to improve the final performance. MAT shows its effectiveness in experiments on the SemEval 2014 datasets.
https://doi.org/10.1142/9789811223334_0088
ELECTRE III is a well-known and widespread multi-criteria decision analysis (MCDA) method that has been successfully applied to many different decision-making problems, but when such problems involve qualitative information and its inherent uncertainty, there is no suitable way to apply it without simplifying the input assessments. Therefore, in this paper, we propose a linguistic extension of the ELECTRE III method based on the 2-tuple linguistic representation model that allows the decision-maker to provide his/her evaluations by means of fuzzy linguistic terms, according to his/her knowledge of either the problem or a specific element of it. The new method uses a linguistic distance measure and is appropriate for multi-criteria ranking problems with qualitative criteria and uncertain information. Our proposal consists of developing tools and operators for the ELECTRE III method to deal with linguistic information.
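The 2-tuple linguistic representation underpinning the extension can be sketched as follows (the standard Δ/Δ⁻¹ operators; the label set is a made-up example, and Python's `round` uses banker's rounding at exact .5 values):

```python
def to_2tuple(beta):
    # Delta: map a value beta in [0, g] to (label index, symbolic
    # translation), with the translation alpha in [-0.5, 0.5).
    i = round(beta)
    return i, beta - i

def from_2tuple(i, alpha):
    # Delta^{-1}: recover the numeric value from the 2-tuple.
    return i + alpha

labels = ["none", "low", "medium", "high", "perfect"]  # g = 4
beta = 2.7   # e.g. an aggregated linguistic assessment
i, alpha = to_2tuple(beta)
print(labels[i], round(alpha, 2))  # → high -0.3
```

The pair (label, α) keeps aggregation results interpretable as words while losing no numeric information, which is what makes linguistic distance measures well defined.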
https://doi.org/10.1142/9789811223334_0089
Aiming to alleviate the load on maternity hospitals, online diagnosis for pregnancy consultations is urgently needed. In this paper, we focus on extracting knowledge from clinical data to guide conversations between humans (pregnant women) and machines (a medical knowledge base). To achieve this, a fuzzy binary decision tree model is used. The experimental results show that this model outperforms others and that the generated decision tree can successfully guide the process of pregnancy consultations.
https://doi.org/10.1142/9789811223334_0090
Supplier selection is the process of selecting, based on several criteria, a proper supplier from which to obtain the materials needed to support a company's outputs. However, the environmental situation in the world is becoming more delicate, and countries force companies to comply with several green policies. For this reason, supplier selection has evolved into green supplier selection, in which environmental criteria are considered. Multi-criteria decision-making models have been applied to solve supplier selection problems, but these proposals do not provide a proper modeling of the information in their inputs and/or outputs. This paper aims to apply the ELICIT linguistic model to green supplier selection in order to obtain precise and understandable results, overcoming previous proposals.
https://doi.org/10.1142/9789811223334_0091
The problem of the justified use of Fuzzy MCDA (FMCDA) models, which are fuzzy extensions of classical MCDA methods, is explored using Fuzzy MAVT models as an example. The use of such models under conditions of differing rankings of alternatives and violation of the basic FMCDA axiom forms a problem that the authors call the presumption of model adequacy.
https://doi.org/10.1142/9789811223334_0092
Product Development (PD) is a crucial area for companies that want to hold a competitive and strategic advantage in the market. A PD partner becomes a long-term partner; hence its selection is a critical process. In this paper, PD partner evaluation and selection is approached as a multi-criteria decision-making (MCDM) process in which an ELICIT information-based framework is suggested. The main aim of the process is to create a flexible decision-making environment for decision-makers using linguistic expressions. The recommended methodology is tested in a case study with a Turkish agriculture firm, and the results are presented in the paper.
https://doi.org/10.1142/9789811223334_0093
Decarbonising emissions-heavy industrial sectors is key to delivering on the Paris Agreement. In Austria, the iron and steel sector accounts for a large share of the country's greenhouse gas emissions and needs to introduce new technologies oriented toward green hydrogen and renewable energies. Acknowledging that such a transition involves diverse exogenous risks and possible consequences, our research attempts to prioritise, from the stakeholders' perspective, the risks associated with a pathway promoting a low-carbon iron and steel sector in Austria. We use a 2-tuple TOPSIS model and carry out group decision making based on the Computing with Words methodology.
https://doi.org/10.1142/9789811223334_0094
Visual question answering is a cross-modality task that needs to simultaneously understand multi-modality inputs and then reason to provide a correct answer. A number of creative works have been done in this field, but most of them are not truly end-to-end systems, as the commonly used fine-tuning technique for the convolutional neural network module usually causes the system to become stuck in a local optimum. This problem is concealed in most current works, which, limited by computing power, use image features only. The development of graphics processing units offers an opportunity to solve this problem. In this work, the challenge is analysed and an effective solution is proposed. Experiments on two public datasets demonstrate the effectiveness of the proposed method.
https://doi.org/10.1142/9789811223334_0095
Dense video captioning is a high-level visual understanding task dedicated to the semantic interpretation of events in a video. Transformers are now widely employed for this task in view of their high parallelism and ability to capture long-term dependencies. However, in current works the relevance of different visual attributes is usually limited by absolute position embedding. To address this problem, a novel position embedding fusion module is proposed for dense video captioning. Experiments on the public ActivityNet Captions dataset demonstrate the effectiveness of the proposed method in enhancing the correlation of individual events, scoring 10.3635 (2018) and 7.2181 (2019) on the METEOR metric.
https://doi.org/10.1142/9789811223334_0096
The task of referring relationships in images aims to locate the entities (subject and object) described by a relationship triple <subject − relationship − object>, which can be viewed as a retrieval problem between structured texts and images. However, existing works extract features of the input text and image separately and thus capture the correlations between the two modalities insufficiently. Moreover, the attention mechanisms used in cross-modal retrieval tasks do not consider local correlation in images. To address these issues, a cross-modal similarity attention network is proposed in this work, comprising a cross-modal metric learning module and a cross-modal local attention module. The cross-modal metric learning module adaptively models the similarity between the query text and the input image, and refines image features to obtain cross-modal features. The cross-modal local attention module concentrates on the query entity in the cross-modal features, both across image channels and within spatial local regions. The experiments demonstrate the superiority of the proposed approach over current powerful frameworks on two challenging benchmark datasets, Visual Genome and VRD.
https://doi.org/10.1142/9789811223334_0097
Video prediction has recently drawn increasing attention for its application potential. However, long-term prediction is challenging to model, since dense pixels must be predicted along both the spatial and temporal dimensions. Several recent approaches for long-term video prediction treat pixel transformation as a global process among adjacent frames, while the actual positions and motions of pixels in real videos are arranged hierarchically. Inspired by this, a novel hierarchical prediction model is proposed in this work to decompose the complex, composite motions of real videos into simple ones based on their locations; this reduces learning difficulty and fits various movements. In addition, high-resolution videos, which are harder to model because of their larger ranges of movement and much greater detail, are also investigated. The proposed model builds on a spatial transformer predictor to realize a hierarchical structure for learning motions from videos. The experimental results on the benchmark real-world video dataset Human3.6M demonstrate the effectiveness of the proposed model in comparison with baseline approaches.
https://doi.org/10.1142/9789811223334_0098
In the existing radial basis function neural network (RBFNN), the hidden-layer neurons and network weights are learned separately, so in classification tasks the learning of the neurons is not closely related to classification performance. To overcome this problem and obtain a better network structure, a sparse polynomial radial basis function neural network in unit hyperspherical space (SPRBFNN-UH) is proposed, in which the hidden-layer neurons and network weights are trained simultaneously for the purpose of classification. Moreover, the added sparse constraint yields a better sparsity effect, driving the sparse part close to zero while enhancing the non-sparse part. The proposed network can thus obtain a better sparse structure and improve classification accuracy. In the experiments, three databases are selected to evaluate sparsity and classification rate, and the results show that the algorithm is superior to five related algorithms.
https://doi.org/10.1142/9789811223334_0099
Convolutional Neural Networks (CNNs) have made incredible progress in numerous research areas. However, the exponential growth of digital images causes over-burdening due to irrelevant features, heavy redundancy, and noisy data, affecting both the processing speed of a CNN and its classification accuracy. In this study, a novel reduction algorithm using rough set theory, with no information loss, is proposed as a data pre-processor for CNNs. The proposed algorithm reduces the data through feature reduction and noisy-sample reduction: rough sets are used to identify the noisy boundary samples with mislabeled classes to be removed, based on KNN rules. Experiments demonstrate that the proposed approach can increase the overall performance of convolutional neural networks.
https://doi.org/10.1142/9789811223334_0100
Readmission to the intensive care unit (ICU) in sepsis patients is costly and associated with poor patient outcomes. Prediction of ICU readmission in patients with sepsis is of particular interest to physicians, as it allows early interventions that help optimize ICU discharge time and avoid deterioration of patient condition. In this study, we used an openly accessible database to identify patients with sepsis and built models for 30-day ICU readmission prediction. In order to reduce the high dimensionality of the dataset and avoid model overfitting, we adopted two feature selection approaches, sequential forward selection and recursive feature elimination, to identify a subset of the most predictive features. After feature selection, we built ICU readmission prediction models for sepsis patients using eXtreme Gradient Boosting (XGBoost), a Multi-Layer Perceptron, and Logistic Regression. We compared the prediction performance of the three models and found that the XGBoost-based model has the best discrimination. This model has the potential to aid ICU physicians in identifying sepsis patients at high risk of ICU readmission and thus may help improve clinical outcomes.
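Sequential forward selection, one of the two feature selection approaches mentioned, can be sketched as follows (the scoring function, utilities, and feature names are hypothetical, not from the study, which would score subsets by cross-validated model performance):

```python
def sequential_forward_selection(features, score_fn, k):
    # Greedily add the feature that most improves the score of the
    # currently selected subset; stop when nothing improves it.
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no remaining feature improves the model
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy scorer: each feature has a fixed utility, with a penalty
# when two redundant features ("hr" and "pulse") are both chosen.
UTILITY = {"lactate": 3.0, "hr": 2.0, "pulse": 1.9, "age": 1.0}
def score_fn(subset):
    s = sum(UTILITY[f] for f in subset)
    if "hr" in subset and "pulse" in subset:
        s -= 1.8  # redundancy penalty
    return s

print(sequential_forward_selection(UTILITY, score_fn, k=3))
# → ['lactate', 'hr', 'age']
```

Note how the greedy procedure skips "pulse" despite its high individual utility, because it adds little once "hr" is already in the subset; this redundancy handling is why wrapper-style selection helps against overfitting on high-dimensional clinical data.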
https://doi.org/10.1142/9789811223334_0101
This study aimed to construct an ensemble-learning-based model for predicting prolonged length of stay in the intensive care unit (pLOS-ICU) for general ICU patients. We used the Medical Information Mart for Intensive Care (MIMIC) III database for model development and validation. We constructed five models for pLOS-ICU prediction: a customized simplified acute physiology score (SAPS) II model, a classification and regression trees (CART) model, a random forest (RF) model, an adaptive boosting (AdaBoost) model, and a light gradient boosting machine (LightGBM) model. Five-fold cross-validation was adopted to evaluate the prediction performance of the five models. The results suggest that the LightGBM model achieved the best overall performance, discrimination, and calibration among the five models, with the best-fitting calibration curve. The LightGBM-based pLOS-ICU prediction model has great potential to support ICU physicians in patient management and medical resource allocation.
https://doi.org/10.1142/9789811223334_0102
The evidential reasoning (ER) rule for multi-attribute classification has recently been developed; it enhances Dempster's rule by defining the weight and reliability of the evidence. This paper introduces the ER rule for data classification and ensemble learning in the domain of trauma research. It also aims to identify multiple methods for building trauma prediction models and to increase model accuracy in order to enhance the care services provided to trauma patients. The model proposed in this paper includes age, gender, age and gender combined, injury severity score, Glasgow coma scale, and modified Charlson comorbidity index as predictors of patient outcomes at 30 days or at discharge, whichever occurred first. The results of machine learning (ML) algorithms such as decision trees, random forests, and artificial neural networks are compared to logistic regression results. The area under the curve (AUC) of the artificial neural network algorithm is 0.9076, which outperforms that of the logistic regression presented in the paper, i.e. 0.9045. Moreover, the application of the ER rule for ensemble learning shows adequate prediction performance.
https://doi.org/10.1142/9789811223334_0103
A patient-oriented model for assessing the cardiovascular (CV) health of men, obtained as a result of medical observation, and the observation itself are considered. The specificity of the proposed methodology is determined by its orientation toward men, its focus on self-observation of CV health, and its composition of indicators of men's CV health as well as the forms and methods of their estimation. A technique, TAMECH, for the assessment of men's cardiovascular health based on a patient-oriented model, the theory of fuzzy sets, formal concept analysis, and linguistic summaries is proposed and its application is considered.
https://doi.org/10.1142/9789811223334_0104
Image processing is still challenged by noise, which manipulates the intensity of the image, so removing or reducing noise is a must before working with an image. This remains an active area of research because none of the established or proposed noise-reduction methods can fully recover the original image, and there are different types of noise: different algorithms work well with different noise types, and only up to a certain noise level. In this paper, an adaptive noise removal algorithm is proposed that works well with impulse noise and does not blur the edges of the input image. While removing the noise, the algorithm uses an adaptive mask, an n × n square or cross mask, where n is usually an odd number. Our proposed algorithm achieves a Peak Signal to Noise Ratio of 15.38 dB, outperforming existing filters.
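A minimal sketch of an adaptive impulse-noise filter in this spirit (not the authors' exact algorithm; the salt-and-pepper impulse test and the window-growth rule are simplifying assumptions):

```python
from statistics import median

def adaptive_impulse_filter(img, max_n=5):
    # Replace only suspected impulse pixels (extreme values 0 or 255)
    # with the median of a growing n x n window; all other pixels are
    # left untouched, which is what preserves edges.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] not in (0, 255):
                continue  # not salt-and-pepper noise: keep the pixel
            n = 3
            while n <= max_n:
                r = n // 2
                window = [img[j][i]
                          for j in range(max(0, y - r), min(h, y + r + 1))
                          for i in range(max(0, x - r), min(w, x + r + 1))]
                m = median(window)
                if 0 < m < 255:
                    out[y][x] = m
                    break
                n += 2  # the window is saturated with noise: enlarge it
    return out

noisy = [[10, 12, 255],
         [11, 0, 13],
         [12, 11, 10]]
clean = adaptive_impulse_filter(noisy)  # impulses at (1,1) and (0,2) fixed
```

Because clean pixels are never rewritten, step edges between uniform regions survive the filtering, unlike with a plain median filter applied everywhere.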
https://doi.org/10.1142/9789811223334_0105
Machine learning is nowadays considered one of the most popular fields in computer science and has shown great success in classification and prediction. In this paper, we focus on the Artificial Immune Recognition System (AIRS), a supervised learning method inspired by immune system metaphors. This technique has been applied in various areas and has shown promising prediction results, and different versions of AIRS have been proposed, such as AIRS2 and AIRS3. Nevertheless, these two versions share a major limit: their inability to work under uncertainty, which is a big challenge in real-world classification problems. This paper therefore addresses the problem of handling uncertainty in the classification process using belief function theory, proposing a new machine learning approach called WE-AIRS (Weighted Evidential AIRS). In WE-AIRS, the number of training antigens represented by each memory cell is taken into account, and the classification of antigens is performed based on their derived weights. The performance of the new weighted evidential method is validated on five real-world data sets and compared to the other traditional AIRS versions.
https://doi.org/10.1142/9789811223334_0106
With the advent of powerful deep neural networks, many ship detection methods rely heavily on large sets of labeled data owing to the complexity of the marine environment and of marine targets. However, building large labeled datasets is costly and time-consuming, while unlabeled data are easy to obtain through the continuous acquisition of image data in maritime surveillance. Inspired by virtual adversarial training, which smooths the label distribution given the input, we propose a class-coordinate adversarial regularization (CCAR) to detect ships in a semi-supervised manner. CCAR consists of class adversarial regularization and coordinate adversarial regularization, which are used, respectively, to achieve local smoothness of the label distribution and of the position distribution of the disturbed targets. We use a region proposal network (RPN) to generate ship proposals and take a modified Wide Residual Network as the backbone, which reduces the detection problem to a classification problem. Experimental results show that the ship detection model based on the proposed CCAR achieves a better mean average precision (mAP) of 0.797 compared with its fully supervised counterpart.
https://doi.org/10.1142/9789811223334_0107
In the modern information era, fall accidents are one of the leading causes of injury, disability, and death among elderly individuals. This research focuses on object detection and recognition using deep neural networks, applied to the theme of fall detection. We propose a deep learning algorithm with the capability to detect fall accidents based on the state-of-the-art object detector YOLOv3. Our system is tested on a challenging video database with diverse fall accidents under different scenarios and achieves an overall accuracy rate of 63.33%. The proposed deep network shows great potential to be deployed in real-world scenarios for health monitoring.
https://doi.org/10.1142/9789811223334_0108
The shared task goals of biological vision and computer vision have impelled emerging research on bio-inspired computational models, in which contour detection is one of the most prominent areas. However, most contour extraction models inspired by neurobiological findings concentrate merely on the V1 area of the cortex, neglecting the anatomy of the visual system and the holistic visual pathway. In this paper, we propose a novel contour extraction model, named the Multi-layer Visual Pathway (MVP) model, that simulates the cascade of the human visual pathway from retina and lateral geniculate nucleus (LGN) to cortex. We use a retina-inspired filter to generate the retinal projection and a Difference of Gaussians function to produce the LGN output; a cortex-based model then receives the LGN inputs to extract contours. The MVP model extracts contours effectively and outperforms the CORF and Canny models in most cases.
https://doi.org/10.1142/9789811223334_0109
The most common malignancies in the world are skin cancers, with melanomas being the most lethal. The emergence of Convolutional Neural Networks (CNNs) has provided a highly compelling method for medical diagnosis. This research therefore conducts transfer learning with grid search based hyper-parameter fine-tuning using six state-of-the-art CNN models for the classification of benign nevus and malignant melanomas, with the models then being exported, implemented, and tested on a proof-of-concept Android application. Evaluated using Dermofit Image Library and PH2 skin lesion data sets, the empirical results indicate that the ResNeXt50 model achieves the highest accuracy rate with fast execution time, and a relatively small model size. It compares favourably with other related methods for melanoma diagnosis reported in the literature.
https://doi.org/10.1142/9789811223334_0110
This research conducts transfer learning with optimal training option identification for the detection of wrist bone abnormalities in X-Ray imagery. Specifically, transfer learning based on Convolutional Neural Networks (CNNs), such as ResNet-18 and GoogLeNet, has been developed for wrist bone abnormality detection. The effect of altering the number of epochs on the network performance using an automatic process is also investigated. The MURA wrist radiological images are used in our experiments. The proposed system achieves superior performance for wrist bone abnormality detection in comparison with existing studies.
https://doi.org/10.1142/9789811223334_0111
Personalized tag recommender systems are crucial for collaborative tagging systems. However, traditional personalized tag recommendation models tend to be vulnerable to adversarial perturbations of their model parameters, which leads to poor generalization performance. In this paper, we propose an adversarial-learning-based personalized tag recommendation method, which integrates adversarial learning into the classic pairwise interaction tensor factorization model. Specifically, we inject adversarial perturbations into the embedded representations of users, items and tags, and minimize the objective function of the pairwise interaction tensor factorization model with the perturbed parameters to increase the robustness of the underlying factorization model. Experimental results on real-world datasets show that our proposed adversarial-learning-based personalized tag recommendation model outperforms traditional tag recommendation models.
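Adversarial perturbations of embeddings, as described above, are commonly constructed by shifting a representation along the normalized loss gradient by a budget eps. A minimal sketch under that assumption (the paper's exact perturbation scheme is not given in the abstract):

```python
import math

def adversarial_perturbation(grad, eps):
    """Worst-case perturbation of magnitude `eps` along the loss gradient
    (the fast-gradient direction typically used in adversarial training)."""
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    return [eps * g / norm for g in grad]

def perturb_embedding(embedding, grad, eps=0.1):
    """Return the embedding shifted by the adversarial perturbation;
    training then minimizes the loss under this worst-case shift."""
    delta = adversarial_perturbation(grad, eps)
    return [e + d for e, d in zip(embedding, delta)]
```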
https://doi.org/10.1142/9789811223334_0112
In this article, new possibilities for aggregating information from the different channels of color images were explored. This was done by giving a different importance (threshold) to each channel during the scale phase of edge detection. After that, several methods for aggregating the edges extracted from each channel were applied. The output of the algorithms was compared against the Berkeley image data set. The results of the experiments showed that using a different threshold for each channel and aggregating them makes the edge map closer to the human-annotated one than the grayscale approach does. These results also showed that the eight-dimensional color space called Super8, developed in earlier work, yields more significant edges than those obtained from RGB. Moreover, the results point out significant differences in the edges depending on which color channel they were extracted from.
https://doi.org/10.1142/9789811223334_0113
We propose a sound theoretical justification for the widely accepted heuristic of choosing keypoints. We show that if the keypoints are selected as points where a non-local Laplacian reaches its extreme values, then there exists an inverse F-transform operator that computes an approximation of an original image with a suitable (high) quality. Theoretical results are supported by numerical tests and illustrations.
https://doi.org/10.1142/9789811223334_0114
Three data analysis techniques are examined: dimensionality reduction, locally linear embedding and the F-transform. We show that all of them can be connected using the notion of a fuzzy partition and the corresponding construction of a non-local Laplace operator. The article includes the results of two comprehensible numerical experiments that compare the achieved outputs. An important conclusion is that the outputs correspond to the values of the first-degree F-transform components.
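For readers unfamiliar with the F-transform used above, a minimal zero-degree sketch with a uniform triangular fuzzy partition may help; the article itself works with first-degree components, which extend this construction with linear terms:

```python
def triangular_partition(n_nodes, length):
    """Uniform triangular fuzzy partition A_1..A_n over points 0..length-1.
    Returns a membership function A(k, x) in [0, 1]."""
    h = (length - 1) / (n_nodes - 1)          # distance between partition nodes
    nodes = [k * h for k in range(n_nodes)]
    def membership(k, x):
        return max(0.0, 1.0 - abs(x - nodes[k]) / h)
    return membership

def f_transform(signal, n_nodes):
    """Direct zero-degree F-transform: each component is the weighted mean
    of the signal with respect to one basic function of the partition."""
    A = triangular_partition(n_nodes, len(signal))
    comps = []
    for k in range(n_nodes):
        w = [A(k, x) for x in range(len(signal))]
        comps.append(sum(wi * si for wi, si in zip(w, signal)) / sum(w))
    return comps
```

A constant signal is reproduced exactly by its components, which is the sanity check one expects of any weighted-mean construction.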
https://doi.org/10.1142/9789811223334_0115
Regularization is a principle that concerns a wide range of scientific domains, and several methods using this technique have been proposed. However, there are some limitations to the functionals used in regularization. To remove these, the idea is to employ nonlocal operators on weighted graphs in the regularization process. In images, pixels have a specific organization expressed by their spatial connectivity; therefore, a typical structure used to represent images is a graph. The problem is to choose the correct one, because the topology of graphs can be arbitrary and each type of graph suits a different type of problem. In this work, we focus on a method based on the nonlocal Laplace operator, which has become increasingly popular in image processing. Moreover, we introduce a representation of the F-transform-based Laplace operator.
https://doi.org/10.1142/9789811223334_0116
The exponential H∞ consensus problem is investigated for nonlinear leader-following multi-agent systems (MASs) with time delay via periodic intermittent control. Intermittent control works effectively by limiting communication time, which means an agent is only informed by the leader and neighbour nodes at certain times. Sufficient conditions ensuring the agreement of all agents are established on the basis of the Lyapunov functional method and the linear matrix inequality (LMI) approach. Finally, a numerical example is given to illustrate the feasibility and effectiveness of the results.
https://doi.org/10.1142/9789811223334_0117
The goal of this work is to present the results of research done as part of the project dedicated to the elaboration of a model Regional Center for Cybersecurity (RegSOC). The paper presents the main assumptions and the results of the evaluation of a prototype anomaly detection module within the Regional SOC project. The framework of the anomaly detection module is briefly described, and the results of the implemented detection method, which uses a neural network, are discussed.
https://doi.org/10.1142/9789811223334_0118
After being admitted to the intensive care unit (ICU), a patient may require mechanical ventilation (MV) if he/she suffers from acute respiratory failure. Vital signs and lab tests associated with the patient are typically recorded as a series over time. We propose an LSTM-based deep relative risk model to quantify a patient's time to the occurrence of MV. The internal time-varying covariates motivate us to learn the ratio function via an LSTM net. The number of LSTM cells equals the width of the sampling window; that is, the i-th cell of the LSTM net takes the patient's covariates of time interval i as input. A subsequent linear layer summarizes the hidden layers into the final partial likelihood contribution of each individual. Such an architecture solves the survival analysis problem with internal time-dependent covariates in a nonparametric way. Our experiments on the MIMIC-III database demonstrate that it is a very promising approach to predicting the occurrence of MV.
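The partial likelihood referred to above is that of the Cox relative risk framework, with the LSTM supplying each individual's risk score. A minimal sketch of the negative log partial likelihood for right-censored data (ties handled Breslow-style, which is our assumption):

```python
import math

def neg_log_partial_likelihood(times, events, risk_scores):
    """Negative log Cox partial likelihood.

    times: observed times; events: 1 if the event (here: onset of MV)
    occurred, 0 if censored; risk_scores: model outputs, e.g. from an
    LSTM head followed by a linear layer."""
    nll = 0.0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:
            continue  # censored subjects contribute only through risk sets
        # risk set: subjects still under observation at time t_i
        log_denom = math.log(sum(
            math.exp(risk_scores[j]) for j, t_j in enumerate(times) if t_j >= t_i))
        nll -= risk_scores[i] - log_denom
    return nll
```

In the paper's setting this quantity would be minimized with respect to the LSTM parameters; here the scores are plain numbers for illustration.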
https://doi.org/10.1142/9789811223334_0119
The graph regularized nonnegative matrix factorization (GNMF) algorithm has received extensive attention in the field of machine learning. GNMF generally uses the square loss to measure the quality of reconstructed data. However, noise is introduced when high-dimensional data are mapped to a low-dimensional space, which decreases model clustering accuracy since the square loss is sensitive to noise. To solve this issue, this paper proposes a novel graph regularized sparse NMF (GSNMF) algorithm. To obtain cleaner data matrices that approximate the high-dimensional matrix, an l1-norm on the reconstructed low-dimensional matrix is added, which adjusts the data eigenvalues in the matrices and imposes sparse constraints on the objective function. For the optimization of our algorithm, the corresponding derivation is given together with an iterative updating algorithm. Experimental results on 8 datasets show that the proposed algorithm achieves superior performance.
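As a rough illustration of the sparsity mechanism, the sketch below applies standard multiplicative NMF updates with an l1 penalty on the low-dimensional matrix H; the graph-regularization term of the actual GSNMF objective, and its derivation, are omitted here:

```python
import numpy as np

def sparse_nmf(V, rank, lam=0.01, n_iter=200, seed=0):
    """Multiplicative-update NMF for V ~ W @ H with an l1 penalty (weight
    `lam`) on H. A simplified stand-in for GSNMF, which additionally
    carries a graph-regularization term."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        # l1 penalty enters the H update as a constant in the denominator,
        # shrinking small entries of H toward zero
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps both factors nonnegative throughout, which is the defining constraint of the NMF family.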
https://doi.org/10.1142/9789811223334_0120
Deep semi-supervised learning has been widely implemented in real-world applications due to the rapid development of deep learning. Recently, attention has shifted to approaches such as Mean-Teacher that penalize the inconsistency between two perturbed input sets. Although these methods may achieve positive results, they ignore the relationship information between data instances. To solve this problem, we propose a novel method named Metric Learning by Similarity Network (MLSN), which aims to learn a distance metric adaptively on different domains. By co-training with the classification network, the similarity network can learn more information about pairwise relationships and performs better on some empirical tasks than state-of-the-art methods.
https://doi.org/10.1142/9789811223334_0121
Automobile insurance fraud detection has become critically important for reducing the costs of insurance companies. This survey categorises, compares, and summarises almost all published data-mining-based technical and review articles on automated automobile insurance fraud detection. Compared with related reviews on fraud detection, this survey not only focuses on the automobile domain alone, and is hence more subject-oriented, but also lists the relevant data processing techniques and publicly available real data on which to perform experiments.
https://doi.org/10.1142/9789811223334_0122
This paper describes a robust convolutional neural network deep-learning architecture involving multi-layer feature extraction for the classification of house types. Previous studies show that this type of classification is not simple and that most classifier models from the literature have relatively low performance. To find a suitable model, several similar classification models based on convolutional neural networks have been explored. We have found that adding better and more complex features does result in a significant accuracy improvement. Therefore, a new model taking this finding into consideration has been developed, tested and validated. For training, testing and verification of the developed model, various house images extracted from the Internet have been used. The test results clearly demonstrate and validate the effectiveness of the developed deep-learning model.
https://doi.org/10.1142/9789811223334_0123
Various studies have shown that convolutional neural networks (CNNs) can successfully classify document types by processing the related document-images. Generally, document classes are differentiated through the similarity, or not, of their respective structures. Although many scientific works have been published in this area of research, most of them do not reach the accuracy level required for practical application scenarios (e.g., the digital office). This paper presents a new neural model based on convolutional neural networks for automatically and reliably detecting/classifying complex document types. A comprehensive benchmarking of our novel model against various other well-known CNN-based classifiers clearly demonstrates that our model significantly outperforms all of them, reaching an accuracy of 94.3%.
https://doi.org/10.1142/9789811223334_0124
In this paper, we develop and test/validate a Convolutional Neural Network (CNN) model for image quality enhancement of document-images that are seriously distorted by blur. For many document-image processing systems, such as OCR (optical character recognition) and document classification, the quality of the document-image has a significant impact on the respective performance, i.e. the OCR character detection performance and/or the document classification accuracy/precision. Our results demonstrate that blurred document-images which were not previously recognizable by an OCR system can now be recognized with 95% accuracy/precision after they have been enhanced with our CNN model. Patch-based processing makes it possible to use a much lower amount of memory despite the possibly huge size of the input images. Therefore, this model has very good portability and can be implemented even on low-power and low-memory (even embedded/portable) computing units.
https://doi.org/10.1142/9789811223334_0125
A huge number of high-resolution satellite images are used in different fields such as environmental observation, climate forecasting, urban planning, public services, and precision agriculture. In particular, remote sensing techniques are important for land-use monitoring, which is the most important task in the effective management of agricultural activities. Traditional object detection and classification algorithms are inaccurate, time-wasting and unreliable for this problem, and although many researchers have discussed the domain, the results are still not good enough. Hence, this paper focuses on deep-learning-based approaches for object detection and classification in satellite images. It is devoted to the implementation and effective training of the U-Net fully convolutional neural network architecture for semantic segmentation of satellite imagery.
https://doi.org/10.1142/9789811223334_0126
The detection and classification of multiple objects (which may be several small documents within a single bigger document-image), particularly with a poor dataset (poor w.r.t. a low number of training samples and class imbalance), is a challenging task due to potential overfitting during the training process. Additionally, the distortions which contaminate document images, such as noise and contrast variations, can further challenge the detection quality, especially when not enough data samples are available for some classes, resulting in strong class imbalance. The dataset used in this research consists of scanned (document-)images with, for each of them, single or multiple documents (e.g. passport, driver’s license, etc.) present within a single document-image page. A multi-step transfer learning technique is introduced and used in this paper to address the multiple-document detection problem under these hard conditions. The main concept of this technique is to construct a “bridging domain” between the source and target domains. A combination of the Faster-RCNN and ResNet50 models is used to implement four different transfer learning methods. With our developed methods, we have achieved an overall performance of 93% “object classification” accuracy and 98% “object detection” accuracy, together with a significant tolerance towards unseen examples.
https://doi.org/10.1142/9789811223334_0127
In this paper we comprehensively investigate and discuss the most important core issues of “data quality” management. Specifically, the following issues are addressed: (a) What are the various realistic imperfections and/or sicknesses which can affect data, and what are their respective origins? (b) Further, what are the appropriate diagnostic concepts (i.e. detection algorithms/schemes) w.r.t. each of the imperfections/sicknesses? (c) In addition, what are the respective machine-learning and deep-learning based healing/reparation/mitigation concepts? Finally, for illustrative purposes, the effect of the class-imbalance sickness is taken as a case study, closely analyzed in the context of a special dataset prepared for the case: we consider a comprehensive analysis of the impact of class imbalance on the performance of two selected classifiers with two different architectures.
https://doi.org/10.1142/9789811223334_0128
Deep-learning-based single image super resolution has been researched for a while now. However, for a given low-resolution image, the ability to construct a higher-resolution image with better structural integrity at the edges is yet to be reached. This inability occurs mainly because texture and edges are treated alike in the design and training phases of the neural networks. This paper tries to address these issues by taking both the textural features and the edge features into consideration at both the network design level and the training level. These considerations not only achieve qualitatively better high-resolution images; the resulting neural network model is also lighter in terms of the number of parameters and multiply-accumulate operations when compared with competing state-of-the-art deep-learning-based super resolution approaches.
https://doi.org/10.1142/9789811223334_0129
In this paper we present a sitting posture classification system which uses simple sensors mounted under the legs of a chair. Various classification methods have been used, of which an Artificial Neural Network yielded the best results. We show that this nonintrusive system with a simple design is able to achieve an accuracy of 94% for 8 subjects and 8 classes when the classification was done with familiar users, and 72% when it was done with unfamiliar users.
https://doi.org/10.1142/9789811223334_0130
How the brain encodes visual perception information is a key problem, as are how objective properties of the world are communicated to the brain and how they are recognized through the connections of brain neurons. In this paper, we propose a retinal-coding-based image recognition method, RNCIR for short. We design a new convolutional spiking neural network model based on the retinal coding mechanism and a temporal encoding scheme using unsupervised spike-timing-dependent plasticity (STDP), and then classify the trained objects using a support vector machine (SVM). We test the proposed method on two datasets: (1) on the public MNIST database, the recognition accuracy of RNCIR is 98.42%; (2) on the classical Caltech (face/motorbike) image recognition datasets, the recognition accuracy of RNCIR is 96.83%. Experimental results show that our approach recognizes images from large datasets accurately and efficiently.
https://doi.org/10.1142/9789811223334_0131
Low accuracy, poor real-time performance, and the low efficiency of manual textile defect detection are long-standing problems in the textile manufacturing industry; to address them, an integrated fabric defect detection algorithm based on deep convolutional neural networks is proposed. First of all, considering that the fabric defect sample set is small and easily causes overfitting, the original samples are preprocessed and the data are augmented by random translation, rotation and noise-addition operations, then divided into a training set and a validation set at a ratio of 10:1. The pre-trained model is then fine-tuned on the fabric defect samples, retaining the parameters of all its convolutional layers. Secondly, because the number of parameters of the fully connected layer affects the overall training efficiency, it is proposed to replace the fully connected layer with a pooling operation that combines a global max pooling layer and a global average pooling layer. Finally, a support vector machine (SVM) and a Softmax multi-class classifier perform the recognition, and the two classification results are integrated as the final recognition result. The experimental results show that the algorithm in this paper can effectively improve the accuracy of fabric defect recognition, and the accuracy rate reaches 93.7%.
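The pooling operation that replaces the fully connected layer can be sketched as follows; this is a pure-Python illustration on nested lists, whereas the real model of course operates on tensors:

```python
def global_pool(feature_maps):
    """Replace a fully connected layer by concatenating global max pooling
    and global average pooling over each channel's feature map.

    feature_maps: list of channels, each a 2-D list (H x W).
    Returns a flat vector of length 2 * n_channels, with no trainable
    parameters (the efficiency gain over a fully connected layer)."""
    pooled = []
    for fmap in feature_maps:
        flat = [v for row in fmap for v in row]
        pooled.append(max(flat))               # global max pooling
    for fmap in feature_maps:
        flat = [v for row in fmap for v in row]
        pooled.append(sum(flat) / len(flat))   # global average pooling
    return pooled
```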
https://doi.org/10.1142/9789811223334_0132
A real-time path planning method is proposed for indoor mobile robots based on the theory of differential neighborhoods, on the assumption that the global path is known. Firstly, to overcome dynamic obstacles in the mobile robot’s active environment, some differential neighborhoods are obtained by incrementing the neighborhood boundary curve. Secondly, by comparing the width of a differential neighborhood with the width of the mobile robot, a differential feasible neighborhood is selected. Thirdly, the definitions of single-objective fuzzy superiority degree and comprehensive satisfaction degree are given based on the idea of the efficiency coefficient method and the fuzzy superiority set. The comprehensive satisfaction degree is used as the evaluation criterion for selecting the satisfactory trapezoidal feasible neighborhood. Finally, the effectiveness of this method is verified by Matlab simulation experiments on the mobile robot’s motion in a lab scenario. Compared with other methods, this method can guarantee the safety of mobile robots and is more practical since the real size of the robot is considered.
https://doi.org/10.1142/9789811223334_0133
A fault or mechanical flaw causes feeble fluctuations in the position signal. The identification of these oscillations from encoder data may help determine the performance and health condition of the machine. In operation, the trend is usually several orders of magnitude larger than the fluctuations of interest, making it hard to identify feeble swings without deforming the signal. Besides, the swings can be intricate, and their amplitude can change under non-stationary operating conditions. To overcome this problem, singular spectrum analysis (SSA) is suggested in this article to detect the feeble position oscillations of the rotary encoder signal. It allows the complex encoder signal to be reduced to a set of explainable noise-containing components, a collection of periodic oscillations and a trend. A numerical emulation reveals the achievement of the technique; it demonstrates that SSA is superior to empirical mode decomposition (EMD) in terms of accuracy and ability. In addition, rotary encoder signals from a robot arm are evaluated to identify the causes of oscillation at the joints during industrial robot movements. The proposed approach for the robotic arm is proven feasible and reliable.
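The basic SSA decomposition (embedding into a Hankel trajectory matrix, SVD, diagonal averaging) can be sketched as follows; the window length and the grouping of components into trend versus oscillation are the analyst's choice and are not specified by the abstract:

```python
import numpy as np

def ssa_components(x, window):
    """Decompose a 1-D series into rank-1 SSA components.

    Each component has the same length as x, and their sum reconstructs
    x exactly; leading components typically carry the trend."""
    n = len(x)
    k = n - window + 1
    # trajectory (Hankel) matrix: X[i, j] = x[i + j]
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for r in range(len(s)):
        Xr = s[r] * np.outer(U[:, r], Vt[r])
        # diagonal (Hankel) averaging back to a length-n series:
        # average every anti-diagonal of the rank-1 matrix
        comp = np.array([np.mean(Xr[::-1].diagonal(i - window + 1))
                         for i in range(n)])
        comps.append(comp)
    return comps
```

A linear trend occupies exactly two components (its Hankel matrix has rank 2), illustrating how SSA separates a large trend from the feeble oscillations riding on it.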
https://doi.org/10.1142/9789811223334_0134
Traditional payment methods require the execution of payments before services, which may no longer satisfy the needs of modern consumers. In this paper, a novel service payment system (SPS) is proposed to support different kinds of customers, including people, machines and devices, where paying for a service does not take place until its completion. The SPS operates by using smart contracts, which are self-executing programs that fulfil the underlying contract terms between a buyer and a seller without involving any third party. The execution of the smart contracts takes place on a blockchain platform, using both in- and outbound events. In addition, the use of the Internet of Things (IoT) together with blockchain technologies allows interactive data exchange through sensors without human intervention. Benefiting from the strengths of cryptography and hash functions, the SPS can hence meet the requirements of consumers, resulting in an improved payment system. The booking of a train ticket is taken as an example to demonstrate that the combination of smart contracts, a blockchain platform and cryptography creates an upgraded payment system capable of serving and satisfying the mindset and requirements of current and future customers in the Big and Fast Data era.
https://doi.org/10.1142/9789811223334_0135
Real-world traffic data sets are almost always accompanied by missing values due to various uncertainties, which to a great extent restrict researchers from performing classical transportation analyses. To solve this pervasive problem, a number of alternative methods have been developed over the last decades. In this paper, we provide an overview of some widely used imputation methods and classify them into three categories, i.e., principle-based approaches, prediction-based approaches, and pattern-based approaches. We aim to familiarize researchers, especially those performing transportation analyses, with the strengths and limitations of all these possible solutions, provide them with some recent developments in this rapidly changing field, and give guidance on how to select such approaches in practice.
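As a small illustration of the simplest ends of these categories, a mean-imputation baseline and a linear-interpolation baseline can be sketched as follows; `None` marks a missing observation, and the interpolation sketch handles interior gaps only:

```python
def impute_mean(series):
    """Baseline: replace missing values (None) with the mean of the
    observed values, ignoring any temporal pattern."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def impute_interpolate(series):
    """Pattern-aware baseline: linear interpolation between the nearest
    observed neighbours (interior gaps only in this sketch)."""
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            lo = max(j for j in range(i) if series[j] is not None)
            hi = min(j for j in range(i + 1, len(series)) if series[j] is not None)
            w = (i - lo) / (hi - lo)
            out[i] = series[lo] * (1 - w) + series[hi] * w
    return out
```

On smoothly varying traffic counts, interpolation usually tracks the local trend that a global mean ignores, which is the basic trade-off the survey's categories formalize.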
https://doi.org/10.1142/9789811223334_0136
In-vehicle human and object identification plays an important role in vision-based automated vehicle driving systems, as objects such as pedestrians and vehicles on roads or streets are the primary targets that driverless vehicles must protect. A challenge is the difficulty of detecting objects while moving under wild conditions, where illumination and image quality can vary drastically. In this work, to address this challenge, we combine Deep Convolutional Generative Adversarial Networks (DCGANs) with a Single Shot Detector (SSD) to handle wild conditions. In our work, a GAN was trained with low-quality images to handle the challenges arising from wild conditions in smart cities, while a cascaded SSD is employed as the object detector working with the GAN. We tested our approach under wild conditions using taxi driver videos on London streets in both daylight and night time, and the tests on in-vehicle videos demonstrate that this strategy achieves a drastically better detection rate under wild conditions.
https://doi.org/10.1142/9789811223334_0137
In recent years, a novel recycling system for spacecraft, which includes a parafoil and a mobile recycling platform, has attracted the attention of researchers. However, compared with the traditional parafoil recycling system with a constant landing position, this system puts higher requirements on the cooperation of the two subsystems to achieve a more accurate and safe landing. In this paper, after introducing this novel system, a trajectory optimization method based on the Gauss pseudo-spectral method is proposed to deal with the consistency and the accurate docking. The detailed results prove the effectiveness and feasibility of the proposed method.
https://doi.org/10.1142/9789811223334_0138
This paper deals with a variant of the integrated production and outbound distribution scheduling (IPODS) problem that considers multiple heterogeneous capacitated vehicles. The problem reflects real-world applications since it covers both the production and distribution stages. In the production phase, orders are produced in a permutation flow shop system. In the distribution phase, multiple heterogeneous capacitated vehicles serve customers, and each vehicle can be used more than once. The objective is to determine the integrated schedule that minimizes the sum of total tour time and tardiness. The IPODS problem thus covers two NP-hard problems, machine scheduling and vehicle routing, so the integrated problem is also NP-hard. We propose a new mathematical model for the integrated problem and evaluate its performance on randomly generated test instances. Comparative results show that CPLEX is able to find optimal solutions only for small-sized instances, and the performance of the model is not satisfactory for operational-level scheduling decisions.
https://doi.org/10.1142/9789811223334_0139
Based on the fuzzy mean-risk-skewness framework, a portfolio selection model is proposed using the Analytic Hierarchy Process (AHP) associated with information granules. In the proposed model, the return of a portfolio is quantified by the fuzzy expected value and skewness, and the risk of a portfolio is quantified by the fuzzy variance and entropy. The model uses an expert system for combined reasoning instead of convex (or non-convex) optimization to obtain the optimal portfolio investment proportions, so it has higher decision efficiency and lower decision cost. The experimental results show the feasibility and effectiveness of the proposed model.
https://doi.org/10.1142/9789811223334_0140
This study is concerned with the problem of online planning of low-cost cooperative paths for Unmanned Surface Vehicles (USVs), based on the distributed consensus of information and an artificial vector field. A reference path is planned off-line to make maximal use of the known environmental information. An information consensus scheme is employed to quickly calculate cooperative paths for a fleet of USVs following the reference path. An artificial vector field is constructed from the globally optimal path and the current, and is used as a heuristic for the optimal Rapidly-exploring Random Tree (RRT*) planner to help it quickly plan low-cost paths. Simulation results show that our online cooperative path planning method performs well.
https://doi.org/10.1142/9789811223334_0141
To reduce image blurring, a generative adversarial constraint loss (GACL) function for Generative Adversarial Networks is proposed. The hinge loss function and the adversarial loss function are combined in the GACL function, which makes the trained generative model stable and reduces image blurring. In experiments on the open-source image datasets MNIST, CIFAR10/100 and CelebA, using the GACL function as a constraint of the generative adversarial network clearly improves the deblurring effect in terms of both the structural similarity measure and visual appearance.
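The abstract does not give the exact form of the GACL combination, so the sketch below only illustrates the two named ingredients, a hinge loss and a (non-saturating) adversarial loss, together with one hypothetical weighted combination of them:

```python
import math

def hinge_loss(real_score, fake_score):
    """Hinge loss on discriminator scores: real samples are pushed above
    +1 and generated samples below -1."""
    return max(0.0, 1.0 - real_score) + max(0.0, 1.0 + fake_score)

def adversarial_loss(fake_score):
    """Non-saturating adversarial term: -log(sigmoid(score)) on the
    generated sample's score."""
    return -math.log(1.0 / (1.0 + math.exp(-fake_score)))

def combined_loss(real_score, fake_score, weight=0.5):
    """Hypothetical weighted combination of the two terms; the paper's
    actual GACL weighting and wiring may differ."""
    return hinge_loss(real_score, fake_score) + weight * adversarial_loss(fake_score)
```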
https://doi.org/10.1142/9789811223334_0142
Image recognition with Neural Networks (NNs) is mostly done with Convolutional Neural Networks (CNNs). As an alternative to CNNs, neural networks in the frequency domain can also be used. In this case, the images are transformed to the frequency domain as pre-processing, and feature representation can be done with the help of the Discrete Cosine Transform (DCT). In this work, we investigate traffic sign recognition using NNs in the frequency domain. We use different measurement metrics to study data similarities across the different layers of the NN. Based on the computed similarity measurements within individual classes and among different classes, we demonstrate the influence of the underlying data representation on the recognition performance of the neural network.
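The DCT pre-processing step can be sketched with an orthonormal type-II DCT on a 1-D signal; images would use its separable 2-D extension, and the paper's exact normalization is not stated in the abstract:

```python
import math

def dct2(signal):
    """Orthonormal type-II discrete cosine transform of a 1-D signal.

    Energy is preserved (Parseval), and smooth signals concentrate their
    energy in the low-frequency coefficients, which is what makes the DCT
    a compact feature representation."""
    n = len(signal)
    out = []
    for k in range(n):
        c = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * c)
    return out
```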
https://doi.org/10.1142/9789811223334_0143
Based on the standard particle swarm optimization algorithm (SPSO), an improved particle swarm optimization algorithm, the adaptive learning factor chaotic master-slave particle swarm optimization algorithm (ACCMSPSO), is put forward; it introduces the concepts of an adaptive learning factor and master-slave particle swarms. In the improved algorithm, the learning factor of each particle is different and changes dynamically according to its own fitness. Once the master particle swarm has evolved for some generations, a slave particle swarm is produced whose initial particles are generated from the global optimal particle of the master swarm in a chaotic way. Simulation results show that the improved algorithm improves the global search capability, convergence speed and robustness, and that its performance is the best among all the algorithms involved in the experiment.
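For reference, the SPSO baseline that ACCMSPSO improves upon can be sketched as follows; the adaptive learning factors and chaotic master-slave swarms are omitted, and all parameter values here are illustrative, not the paper's:

```python
import random

def pso_sphere(dim=2, n_particles=10, n_iter=100, seed=1):
    """Global-best SPSO minimizing the sphere function f(x) = sum(x_i^2).
    c1, c2 are the (fixed) learning factors that ACCMSPSO would adapt
    per particle; w is the inertia weight."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]                # personal bests
    gbest = list(min(pbest, key=f))               # global best
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = list(pos[i])
                if f(pbest[i]) < f(gbest):
                    gbest = list(pbest[i])
    return gbest, f(gbest)
```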
https://doi.org/10.1142/9789811223334_0144
Chaos synchronization of the master-slave generalized Lorenz systems via variable substitution control is studied, deriving some criteria for global chaos synchronization of the master-slave generalized Lorenz systems under a single-variable substitution control. These criteria are then applied to the classical Lorenz systems, Chen systems, and Lü systems, obtaining some new results.
https://doi.org/10.1142/9789811223334_0145
Intelligent electronic components’ management systems (IECMS), which can automatically inquire about component stock and make inventories, have often been proposed in recent years to improve the efficiency of material management. So far, research has focused on system architecture and information management software, while the design of component-monitoring subsystems has been neglected in practical implementations of IECMS for industrial environments; the existing schemes share the disadvantages of high complexity and cost. As IECMS are not yet widely applied, features of hardware customization, maintainability, real-time component monitoring and cloud-based stock tracking are added to IECMS implementations to provide an intelligent and economic solution for advanced material management applications. Preliminary test results prove its effectiveness in intelligent components management.
https://doi.org/10.1142/9789811223334_0146
Based on chaos control, coverage path planning of floor cleaning robots is discussed in this work. As one approach to planning a path that covers the required floor, randomization algorithms have been reported to have lower cost but also lower efficiency compared to sensor-based coverage algorithms. Chaos is ergodic and pseudo-random, with properties such as boundedness and unpredictability, so applying a chaos algorithm is a potential way to optimize and improve coverage path planning. Based on the Duffing map, a chaos algorithm is explored and verified to realize the path planning of sweeping robots.
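A minimal sketch of why an ergodic chaotic orbit suits coverage: iterate the Duffing map and count how many cells of a coarse grid the trajectory visits. This is not the paper's planner; the parameter values a = 2.75, b = 0.2 are the commonly cited chaotic regime of the Duffing map, and the grid scaling is an assumption for illustration.

```python
def duffing_coverage(steps=20000, grid=10, a=2.75, b=0.2):
    # Duffing map: x' = y, y' = -b*x + a*y - y^3 (chaotic for a=2.75, b=0.2).
    x, y = 0.1, 0.1
    visited = set()
    for _ in range(steps):
        x, y = y, -b * x + a * y - y ** 3
        # Map the attractor (roughly inside [-2, 2]^2) onto grid cells.
        i = min(grid - 1, max(0, int((x + 2) / 4 * grid)))
        j = min(grid - 1, max(0, int((y + 2) / 4 * grid)))
        visited.add((i, j))
    return len(visited) / (grid * grid)
```

The returned fraction stays below 1 because the attractor is a fractal subset of the square; a real planner would add a mechanism to steer the orbit toward unvisited cells.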
https://doi.org/10.1142/9789811223334_0147
In order to simplify the control system of an asymmetrical parallel mechanism for servo mechanical presses, a segmented synchronous control scheme is proposed. According to the stamping motion features, the slider motion is divided into three stages, and a different synchronous control method, such as parallel synchronous control or master-slave synchronous control, is adopted for each stage. To verify its validity, an experimental prototype with an IPC as controller is built, and its kinematic and dynamic characteristics are tested. The results show that the simplified synchronous control scheme can realize the required slider motions with a simpler control strategy and more convenient trajectory planning.
https://doi.org/10.1142/9789811223334_0148
The authors have developed a new method (mechanism) for transforming diagrammatic models in the basis of graphic languages. The method takes into account the syntax (topology) as well as the denotative and significative semantics of the transformation. Thanks to the method, the execution time of design workflows during CAD systems design is reduced, and workflow quality is also improved.
https://doi.org/10.1142/9789811223334_0149
This paper uses a quantitative modeling method to quantify the fast-scale instability and chaos existing in parallel Buck converters. The method uses the mode switching time as the sampling point and applies permutation entropy to quantify the mode switching time sequence (MSTS). Compared with the traditional bifurcation diagram and the largest Lyapunov exponent map, it can accurately and quickly discriminate period-doubling bifurcation and border collision bifurcation behaviors in the system.
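Permutation entropy itself is a generic measure, so it can be sketched independently of the converter application: rank each sliding window into an ordinal pattern and compute the Shannon entropy of the pattern distribution. The order m = 3 and delay 1 are illustrative defaults, not the paper's settings.

```python
from math import log, factorial

def permutation_entropy(series, m=3):
    # Count ordinal patterns of each length-m sliding window.
    counts = {}
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    # Shannon entropy of the pattern distribution, normalized by log(m!)
    # so the result lies in [0, 1]: 0 for fully ordered, near 1 for random.
    h = -sum(c / total * log(c / total) for c in counts.values())
    return h / log(factorial(m))
```

A strictly monotone series produces a single ordinal pattern and hence entropy 0, while an irregular series yields a value approaching 1; applied to the MSTS, a jump in this value flags the onset of bifurcation or chaos.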
https://doi.org/10.1142/9789811223334_0150
This paper introduces a fractional-order flyback converter with a fractional-order transformer and a fractional-order capacitor. Firstly, the mathematical model and the state-space average model of the converter in continuous conduction mode (CCM) are established. Then the quiescent operating point is deduced by direct current (DC) analysis under the Caputo definition of the fractional derivative. Furthermore, the effects of the order on the ripple and on the CCM operating condition are discussed. Finally, circuit simulation in PSIM and numerical calculation of the mathematical model are carried out to verify the correctness of the theoretical analysis and the validity of the fractional-order model.
https://doi.org/10.1142/9789811223334_0151
This paper applies the sneak circuit analysis approach to find the sneak circuit paths of the Boost ZVT PWM converter, and then improves the topology to realize advanced features. In detail, the switching Boolean matrix is used to analyze the operating mode of each phase of the circuit. According to the results and the conditions for sneak circuits, an improved method is used to eliminate the sneak circuit paths. Finally, simulations are used to prove the correctness of the theoretical analysis.
https://doi.org/10.1142/9789811223334_0152
Temperature control of the cooling tower (CT) system based on fuzzy-programmable logic controller (PLC) control technology is developed and implemented in this research to increase electrostatic precipitator (ESP) performance, output, and quality and to minimize losses within the CT in a cement plant. This approach exploits a fuzzy logic controller and implements it in the PLC. The valve system is controlled to avoid the occasional wet bottom phenomenon in the CT. A fuzzy valve controller is designed to open and close the valve in the return line at the appropriate time to regulate the temperature of the CT. The program is written in MATLAB and converted to structured text language (STL) by the Simulink PLC Coder; STL is a functional expression that increases the working efficiency of the PLC-1200 system. Results show a substantial increase in ESP quality after the implementation of fuzzy temperature control for the CT to reduce dust emissions from the cement plant, compared with traditional operator control.
https://doi.org/10.1142/9789811223334_0153
This study presents a survey of sneak circuit analysis methods for resonant switched capacitor converters. Most of these sneak circuit path analysis methods are based on the connection matrix, the adjacency matrix, or the switching Boolean matrix. To introduce these methods properly, a three-order step-up resonant switched capacitor converter is used to elaborate them, and they are then compared to reveal their merits and demerits. Based on the state of the art of these methods, some critical conclusions are drawn on how to apply and further improve them.
https://doi.org/10.1142/9789811223334_0154
An improved discrete mapping model of DC-DC converters is introduced in this paper. The model does not depend on the switching frequency of the converter; its frequency is convertible, which is the foundation for building discrete mapping models of cascaded converter systems with different frequencies. This paper takes the peak current-mode controlled Boost converter as an example to establish the improved discrete mapping model. The simulation results verify the correctness of the new model.
https://doi.org/10.1142/9789811223334_0155
This paper proposes a ring network of Hindmarsh-Rose neurons coupled by memristive electromagnetic induction. By adjusting the two coupling strengths on its two sides, various spatiotemporal patterns depicting the oscillation death state, the chimera state, and the traveling chimera state are obtained. To quantitatively characterize the collective behaviors, a statistical measure, the strength of incoherence, is employed to characterize the spatiotemporal patterns. Besides, a two-parameter incoherence map is plotted to analyze the comprehensive cluster behaviors.
https://doi.org/10.1142/9789811223334_0156
Output-series photovoltaic systems are extraordinary in power density, efficiency, and cost, but it is difficult to realize distributed maximum power point tracking (DMPPT) with a high-efficiency module integrated topology, because the modules must have both current step-up and step-down abilities. This paper introduces a novel module integrated converter named the switched boost module integrated converter (SBMIC) with high efficiency and high flexibility. Its small-signal model with parasitic resistance and its PI control loop are also proposed. Simulation shows the expected static performance of the closed-loop system and acceptable dynamic performance while tracking the maximum power point.
https://doi.org/10.1142/9789811223334_0157
Sneak circuits in the DCM Buck converter with parasitic parameters are investigated in this paper. Compared with the DCM Buck converter without parasitic parameters, the number of operating modes in a switching period increases from three to five; the two additional modes are sneak circuits, and they appear under certain conditions. The operational conditions of the sneak circuits are derived from the state equations of the modes. Compared with the DCM Buck converter without parasitic parameters, the output performance of the converter is altered. The theoretical analyses are verified by simulation and experimental results.
https://doi.org/10.1142/9789811223334_0158
In recent years, with the continuous development and application of clean energy, DC/DC converters with high voltage gain have received more and more attention. Traditional DC/DC converters have the disadvantages of small voltage gain and large inductor current ripple. In order to improve the voltage gain and reduce the inductor current ripple, this paper proposes a high voltage gain Zeta converter based on coupled inductors. It combines two traditional Zeta converters and then uses a coupled switched inductor structure to replace the energy storage inductors, so that the voltage gain is increased by 2(1 + D) times. The operating modes of the converter are analyzed in detail and verified by simulation. PSIM simulation results show that the design indeed reduces the inductor current ripple and achieves high voltage gain.
https://doi.org/10.1142/9789811223334_0159
The electric spring (ES) is an emerging power quality adjustment method that can effectively solve power quality problems, especially voltage fluctuations caused by renewable energy sources. However, the existing ES topologies have a limited compensation range due to their structure, in which the ES is in series with the non-critical load (NCL). In this paper, a novel ES based on a passively damped LCL filter is proposed. Unlike the existing ES topologies, the LCL-ES employs the NCL to implement passive damping of the LCL filter, which both provides the passive damping and extends the compensation range. Key points of the topology design and control strategy are also discussed. Finally, the effectiveness of the proposed LCL-ES is verified via simulation.
https://doi.org/10.1142/9789811223334_0160
A simple diode-based circuit, consisting of two single-phase diode rectifiers, which can be considered an equivalent implementation circuit of a memristor, is constructed in this paper. According to the theoretical analysis, the input-port admittance of the proposed circuit conforms to the definition of a flux-controlled memristive system. Simulation and experimental results further show that the proposed circuit has a volt-ampere characteristic curve at the input port with an oblique figure-eight shape, which is unique to the memristor. Unlike memristor equivalent circuits based on integrated operational amplifiers, which handle electric signals with relatively small power, the proposed circuit can in principle process electrical signals at larger power levels, since it is composed only of components such as resistors, capacitors, and diodes. In addition, fewer devices are used, and the structure is simple and easy to realize.
https://doi.org/10.1142/9789811223334_0161
Due to climate change, the energy crisis, and environmental concerns, microgrids including distributed generation have become popular in the electric energy industry, since a microgrid can improve energy benefits while providing a reliable power supply. In order to further maximize economic benefits by improving the utilization rate of renewable energy, this paper builds an optimal dispatching model to optimize grid configuration, in which the electricity price is taken as the main reference element. The effectiveness of particle swarm optimization is illustrated with the power curves of the optimal economic operation of a microgrid.
https://doi.org/10.1142/9789811223334_0162
For power converters with multiple switches, different control methods can be obtained by combining the ON/OFF states of the switches. In this paper, a high voltage gain quasi-switched boost inverter (qSBI) is taken as an example to illustrate how to obtain all possible control strategies by applying the multi-modal combination control method. Then, one novel three-modal control method among all feasible control strategies is selected and compared with the existing control methods in detail. The simulation results verify the advantages of the proposed three-modal control method and prove the feasibility of the multi-modal combination control method.
https://doi.org/10.1142/9789811223334_0163
An advanced isolated Ćuk converter suitable for a wide input voltage range is proposed in this paper for renewable energy generation applications. It has the advantage of stepping the intermittent input voltage up or down to a desired value by adjusting a relatively low duty cycle. A detailed operational analysis of the proposed converter is conducted. Experimental results agree well with the theoretical analyses.
https://doi.org/10.1142/9789811223334_0164
This paper studies the loss and temperature distribution of the bridge arm reactor in a modular multilevel converter (MMC) during operation. Based on COMSOL Multiphysics, a three-dimensional simulation model of the multi-core parallel structure is built. Then, the core loss and temperature distribution of the bridge arm reactor are obtained under the excitation of a large current and a high-frequency square wave voltage. The theoretical and simulation results show that the loss at the core corners is large and the temperature rise of the winding insulation is high, up to 90 °C. The results can serve as a reference for aging lifetime evaluation and heat dissipation design.
https://doi.org/10.1142/9789811223334_0165
DC-DC converters play increasingly important roles in power conversion. In order to meet the various demands of power electronic loads, this paper proposes a programmable topology deduction algorithm based on graph theory for S1D2C1L1-type DC-DC converters. The proposed algorithm is used to discover all feasible S1D2C1L1-type DC-DC converters, with the aim of overcoming the randomness and uncertainty of the topology derivation process. Further, the computer-assisted algorithm takes the place of manual derivation, noticeably reducing the design time cost. Finally, based on the proposed algorithm, 9 novel topologies are found in this paper, which verifies the practicability and correctness of the proposed algorithm.
https://doi.org/10.1142/9789811223334_0166
In this paper, an analytical tuning method of the fractional-order PIλ controller for feedback control systems is proposed. It makes the closed-loop system with the PIλ controller equivalent to an expected model whose transfer function satisfies Bode's ideal function, giving the closed-loop system good gain robustness. Then, the Maclaurin series expansion is applied to the parameter tuning of the PIλ controller, and the controller parameters are derived analytically. Finally, a Buck converter with a PIλ controller is used as an application example; the simulation results verify the correctness and effectiveness of the proposed analytical parameter tuning method.
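For reference, Bode's ideal loop transfer function mentioned above has the well-known form (with gain crossover frequency $\omega_c$ and real, generally non-integer order $\gamma$):

```latex
L(s) = \left(\frac{\omega_c}{s}\right)^{\gamma}, \qquad \gamma \in \mathbb{R}.
```

Its open-loop phase is constant at $-\gamma\pi/2$ for all frequencies, so the phase margin does not change when the loop gain varies, which is the gain robustness the abstract refers to.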
https://doi.org/10.1142/9789811223334_0167
In this paper, an analysis method for the exponential stability of switching converters based on discontinuous control is proposed. The method can determine the exponential stability of switching converters without an approximating model of the converter, and the exponential convergence rate is calculated. On the basis of this exponential stability criterion, a discontinuous controller for the continuous-current-mode Boost converter is designed. The controller adopts discontinuous feedback control, which is more consistent with the switching discontinuity of the converter than continuous feedback control. The simulation results show the superiority of the control method, and the experimental circuit proves its effectiveness.
https://doi.org/10.1142/9789811223334_0168
This paper proposes a new modeling method for the Buck three-level (TL) converter. Based on the equivalent translation principle, the operating conditions of the Buck TL converter in different modes can be transformed into linear inequalities. Then, a new model of the Buck TL converter containing matrix inequalities is obtained. Compared with other Buck TL converter models, the proposed model is more concise. Finally, the proposed model is simulated in MATLAB and its effectiveness is verified by simulation and experimental results; moreover, the experimental results are consistent with the simulation results.
https://doi.org/10.1142/9789811223334_0169
For continuous-time switched linear systems, it is well known that the spectral abscissa is equal to the least common matrix set measure of the subsystems. Based on this equivalence, a computational procedure is proposed in this paper to obtain the least μ1 measure of a switched linear system via coordinate transformations of type 3, which can be used to approximate the spectral abscissa. Furthermore, a stopping criterion for Algorithm 1 is given. An illustrative example exhibits the effectiveness of the proposed method.
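The μ1 matrix measure used above is the measure induced by the 1-norm: for a real square matrix A, μ1(A) = max over columns j of (a_jj + Σ_{i≠j} |a_ij|). The paper's procedure minimizes this quantity over coordinate transformations T (via μ1(T⁻¹AT)); the sketch below implements only the measure itself.

```python
def mu1(A):
    # Matrix measure induced by the 1-norm: for each column j, take the
    # diagonal entry plus the absolute off-diagonal column sum, then maximize.
    n = len(A)
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(n) if i != j)
               for j in range(n))
```

For example, μ1 of [[-2, 1], [0, -3]] is max(-2 + 0, -3 + 1) = -2; a negative measure certifies exponential stability of x' = Ax with rate at most μ1(A).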
https://doi.org/10.1142/9789811223334_0170
With the improvement of human living conditions, how to achieve favorable indoor thermal comfort based on big data is a promising research field in urban computing. This study examines 12 factors that affect indoor thermal comfort and develops a simulation model of a high-speed railway station in Chengdu with the EnergyPlus software. A Deep Neural Network (DNN) model is proposed to examine the relationship between the selected factors and thermal comfort. To investigate the performance of the proposed DNN, Linear Regression (LR), Support Vector Machine (SVM), and Decision Tree (DT) models are also developed. The results indicate that the DNN performs best in terms of RMSE and R2 among the models.
https://doi.org/10.1142/9789811223334_0171
Urban computing can create win-win-win solutions to big issues faced by cities, such as energy consumption. As a huge source of energy consumption, air conditioning has drawn much attention recently. Temperature control of air conditioning based on big data for reducing energy consumption is a promising research area in urban computing. In this paper, an online learning framework for the temperature control of air conditioning is proposed. Then, k-nearest neighbor (KNN), neural network (NN), and support vector machine (SVM) models are embedded into this framework to obtain a favorable air-conditioning control policy. Extensive computational experiments are conducted; the results show that NN outperforms KNN and SVM for indoor temperature control.
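To illustrate one of the embedded learners, here is a from-scratch k-nearest-neighbor regressor of the generic kind such a framework could plug in: given past (features, temperature-response) pairs, it predicts by averaging the k closest samples. The feature choice and data are invented for the sketch; the paper's actual framework and features are not reproduced here.

```python
def knn_predict(train_X, train_y, x, k=3):
    # Rank training samples by squared Euclidean distance to the query x.
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    # Average the targets of the k nearest neighbors.
    neigh = order[:k]
    return sum(train_y[i] for i in neigh) / k
```

In an online setting, newly observed (state, response) pairs would simply be appended to `train_X`/`train_y` between predictions, which is what makes lazy learners like KNN easy to embed in such a framework.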
https://doi.org/10.1142/9789811223334_0172
Intelligent manufacturing has become a main strategy of many countries and has developed rapidly along with flourishing enabling technologies such as advanced communication, big data analysis, and the internet of things. Meanwhile, the mutual promotion of research and application has changed the context of intelligent manufacturing. This work explores making production lines intelligent based on the industrial internet with fifth-generation communication, which is applied and verified in an enterprise.
https://doi.org/10.1142/9789811223334_0173
With the introduction of digital media and the associated, increasingly rapid spread of digitization, the need for automatic methods for the evaluation of learning performance is also growing. Innovative approaches to automatic evaluation are particularly important in engineering courses in university teaching, where students have to learn and practice programming techniques. In this work, we propose and analyze a few general evaluation criteria that are important for the automatic evaluation of programming tasks in university teaching. We consider both hard and soft criteria, targeted at beginners and advanced learners, respectively. Based on these criteria, we propose an approach for the automatic evaluation of programming performance and demonstrate how it can be integrated into an online learning management system.
https://doi.org/10.1142/9789811223334_0174
Sustainable health tourism is designed to meet the economic, social, and aesthetic needs of local people and tourists in the visited region. In recent years, developing countries such as Turkey have accelerated their investments in this field. As sustainable tourism gains importance, site selection plays a major role in sustainable health tourism. In this paper, fuzzy linguistic Prolog is used to match health tourism activities with suitable regions. The paper aims to find investable regions and health tourism activities for investors, according to given sustainability criteria, by using Bousi~Prolog, a fuzzy linguistic Prolog that extends the Prolog paradigm toward computing with linguistic terms. Logic programming allows decision makers to make different consistent decisions, considering the correlations, while giving importance to the criteria that are positively and negatively related to each other.
https://doi.org/10.1142/9789811223334_0175
Industry 4.0, referred to as the “Fourth Industrial Revolution” and also known as “integrated industry”, “smart manufacturing”, and the “industrial internet of things”, has attracted great attention for its potential to transform entire industries into fully automated and self-coordinated digital systems. This transformation process necessitates a significant amount of investment and resources; additionally, adapting current operational technologies to new initiatives could be problematic. In this study we focus on Industry 4.0 project prioritization by using the Spherical Fuzzy Analytic Hierarchy Process. In the application part, we prioritize five projects according to four main project selection criteria.
https://doi.org/10.1142/9789811223334_0176
Existing methods based on coupled mappings for low-resolution face recognition (LRFR) only map images of different dimensions to the same dimension; the mapping process and the mapped images have no clear physical meaning. In the human mind, a whole can be regarded as a combination of its different local features, so face images can also be regarded as compositions of different local features. For face images of the same target at different resolutions, the local features differ in scale, but the way the local features form the whole is consistent. Based on this idea, a novel coupled non-negative matrix factorization (CNMF) algorithm is proposed to deal with the LRFR problem. In the learning process of the proposed method, the high- and low-resolution images are each expressed as linear combinations of local features, and the representation coefficients of different-resolution images of the same target are kept coupled to obtain the respective basis matrices. The proposed CNMF is more interpretable in extracting common features of different-dimensional data. The experimental results show that the proposed coupled non-negative matrix factorization method is superior to other state-of-the-art low-resolution image recognition methods.
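CNMF builds on ordinary non-negative matrix factorization, V ≈ WH with W, H ≥ 0, where the columns of W play the role of the local features the abstract describes. The sketch below is only the single-matrix multiplicative-update machinery (Lee-Seung style) in pure Python; the coupling constraint that ties the coefficients of high- and low-resolution images together is omitted.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r=2, iters=200, seed=0):
    # Factor the non-negative matrix V (m x n) into W (m x r) and H (r x n)
    # using the classical multiplicative update rules.
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

def recon_error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))
```

In the coupled variant, two such factorizations (one per resolution) would share or constrain their coefficient matrices so that the learned bases describe corresponding local features.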
https://doi.org/10.1142/9789811223334_0177
With the mutual promotion and development of computing, communication, and control technologies, building control systems from multiple machines has become a trend. The reliability of the communication mechanism in such systems is the basic guarantee of system effectiveness. In this paper, a communication mechanism suitable for centralized and distributed control systems is proposed, which attempts to provide an effective framework for the design of communication subsystems. The mechanism includes the logical structure of the communication network, client-server RPC as used in a control system, and the protocol with its basic functions. The basic functions of the protocol are concrete yet leave large room for expansion, so the mechanism can be directly applied to the engineering design of small and medium control systems, with good flexibility and scalability.
https://doi.org/10.1142/9789811223334_0178
SAT solving plays an important role in industrial applications, and most SAT competition problems come from industry. However, industrial problems are large in scale and take a long time to solve. This paper first reviews the research progress of hardware-accelerated SAT solving algorithms. Then, a GPU-based Boolean constraint propagation algorithm is implemented on the basis of the 3-SAT-DC algorithm, and a clause-literal association structure for the GPU is designed. The CUDA programming model is used to design and implement the algorithm. Experimental results show that the new 3-SAT-DC algorithm outperforms the original algorithm.
https://doi.org/10.1142/9789811223334_0179
The quality of respiratory function determines the recovery and survival rate of patients with cervical spinal cord injury. A cost-efficient method of evaluating respiratory function is to assess the strength of their cough sounds. However, some patients with cervical spinal cord injury fail to develop an effective cough because of pain or nerve damage, and they produce a shout, called a pseudo-cough herein, rather than a cough. Such pseudo-cough samples should be weeded out to avoid wrong evaluations of respiratory function. In this paper, a linear classifier is proposed to distinguish pseudo-cough sounds from cough sounds for patients with cervical spinal cord injury. To alleviate the dependence on the number of cough-sound and pseudo-cough-sound samples, a lightweight classifier is constructed using merely two features, the zero-crossing rate and the maximal autocorrelation coefficient, and the classifier is trained mainly with unvoiced and voiced sounds rather than cough and pseudo-cough sounds. Experimental results showed a sensitivity of 98% and a specificity of 86.4%.
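The two features named above are standard signal measures and can be sketched generically; framing, lag ranges, and thresholds here are assumptions, not the paper's settings. The maximal normalized autocorrelation (excluding lag 0) is high for periodic, voiced, shout-like sounds, while the zero-crossing rate is high for noise-like, unvoiced bursts such as coughs.

```python
def zero_crossing_rate(x):
    # Fraction of adjacent sample pairs whose signs differ.
    return sum(1 for a, b in zip(x, x[1:]) if (a >= 0) != (b >= 0)) / (len(x) - 1)

def max_autocorr(x, min_lag=1, max_lag=None):
    # Maximal autocorrelation coefficient over positive lags, normalized by
    # the lag-0 energy so a perfectly periodic signal approaches 1.
    n = len(x)
    max_lag = max_lag or n // 2
    e0 = sum(v * v for v in x) or 1.0
    return max(sum(x[i] * x[i + lag] for i in range(n - lag)) / e0
               for lag in range(min_lag, max_lag + 1))
```

A linear classifier then only needs a separating line in this two-dimensional (ZCR, max-autocorrelation) feature plane.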
https://doi.org/10.1142/9789811223334_0180
While consumers value a free and easy return process, the costs to e-tailers associated with returns are substantial and increasing. Consequently, merchants are now tempted to implement stricter policies, but must balance this against the risk of losing valuable customers. With this in mind, data-driven and algorithmic approaches have been introduced to predict if a certain order is likely to result in a return. In this application paper, a novel approach, combining information about the customer and the order, is suggested and evaluated on a real-world data set from a Swedish e-tailer in men’s fashion. The results show that while the predictive accuracy is rather low, a system utilizing the suggested approach could still be useful. Specifically, it is reasonable to assume that an e-tailer would only act on predicted returns where the confidence is very high, e.g., the top 1–5%. For such predictions, the obtained precision is 0.918–0.969, with an acceptable detection rate.
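The act-only-on-high-confidence evaluation described above amounts to measuring precision among the top fraction of predictions ranked by score. A generic sketch (with synthetic toy data, not the paper's e-tailer results):

```python
def precision_at_top(scores, labels, frac=0.05):
    # Rank predictions by confidence score, keep the top `frac` fraction,
    # and compute the share of true returns (label 1) among them.
    ranked = sorted(zip(scores, labels), reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(lab for _, lab in ranked[:k]) / k
```

With `frac` set to 0.01-0.05, this mirrors the abstract's setting where a merchant intervenes only on the most confident 1-5% of predicted returns.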
https://doi.org/10.1142/9789811223334_0181
In the digital era, almost everything is being automated with the aim of replacing hand-operated systems. In recent years, intelligent sensor systems have been used tremendously in agriculture, which witnesses the importance of the smart greenhouse in this field. A variety of sensors and technologies are used for remote monitoring and vision-based control. With these technologies, instead of watching the greenhouse in person for hours on end, farmers can stay home, control the sensors online, and easily monitor plant growth in real time. In this paper we develop a vision- and sensor-based control system for a smart greenhouse. The main objective is a multi-functional, easy-to-control, microcontroller-based system that monitors and records the measured values of environmental parameters such as temperature, humidity, soil moisture, and air quality in the greenhouse. These environmental parameters are continuously adjusted and controlled by the system in order to optimize them for maximum plant growth and yield.
https://doi.org/10.1142/9789811223334_0182
Major landslides occur worldwide in large areas every year, damaging human life and property and thus affecting national well-being. Landslides are among the most costly catastrophic events in terms of human lives and infrastructure damage. Therefore, designing and implementing an early warning monitoring system for landslides is a significant issue. This paper focuses on the design of such a warning monitoring system. The outcome of the design is a model for monitoring the state of the soil and controlling landslide processes.
https://doi.org/10.1142/9789811223334_0183
An anisotype p-Cu2ZnSnSe4/n-GaAs heterojunction was obtained, for the first time, by selenization of base metal layers previously thermally deposited on a GaAs substrate. Both the current-voltage properties and the current transport mechanisms of the created heterostructures are discussed. Under forward bias, the current was found to be limited by space charge, tunneling, and recombination processes. Under reverse bias, currents limited by the space charge in the mobility mode predominate.
https://doi.org/10.1142/9789811223334_0184
In this work we simulate the dependence of the random telegraph noise (RTN) amplitude on the gate overdrive for SOI FinFETs with rectangular and trapezoidal channel cross sections. It is shown that in the sub-threshold region, when a single interface charge is located in the middle of the RTL interface, the RTN amplitude is much higher for the trapezoidal channel cross section. In contrast, the RTN amplitude is higher for the rectangular than for the trapezoidal channel cross section when the interface charge is located at the upper channel interface. We also consider the dependence of the RTN amplitude on the position of a single interface charge along the transistor channel, at the upper interface and at the side channel interface.
https://doi.org/10.1142/9789811223334_bmatter
The following section is included: