FLINS, an acronym originally standing for Fuzzy Logic and Intelligent Technologies in Nuclear Science, was inaugurated by Prof. Da Ruan of the Belgian Nuclear Research Center (SCK·CEN) in 1994 with the purpose of providing PhD and postdoc researchers with a platform to present their research ideas in fuzzy logic and artificial intelligence. Over more than 28 years, FLINS has expanded to include research on both the theoretical and practical development of computational intelligent systems.
Building on this successful conference series: FLINS1994 and FLINS1996 in Mol, FLINS1998 in Antwerp, FLINS2000 in Bruges, FLINS2002 in Ghent, FLINS2004 in Blankenberge, FLINS2006 in Genoa, FLINS2008 in Madrid, FLINS2010 in Chengdu, FLINS2012 in Istanbul, FLINS2014 in João Pessoa, FLINS2016 in Roubaix, FLINS2018 in Belfast, and FLINS2020 in Cologne, FLINS2022 was organized by Nankai University and co-organized by Southwest Jiaotong University, the University of Technology Sydney, and the Ecole Nationale Supérieure des Arts et Industries Textiles of the University of Lille. This unique international research collaboration has provided researchers with a platform to share and exchange ideas on state-of-the-art developments in machine learning, multi-agent systems, and cyber-physical systems.
Following the wishes of Prof. Da Ruan, FLINS2022 offered an international platform that brought together mathematicians, computer scientists, and engineers actively involved in machine learning, intelligent systems, data analysis, knowledge engineering, and their applications, to share their latest innovations and developments and exchange notes on state-of-the-art research ideas, especially in the areas of industrial microgrids, intelligent wearable systems, sustainable development, logistics, supply chain and production optimization, evaluation systems and performance analysis, as well as risk and security management, which have now become part and parcel of Fuzzy Logic and Intelligent Technologies in Nuclear Science.
This FLINS2022 Proceedings includes 78 selected conference papers covering the following seven areas of interest:
Sample Chapter(s)
Preface
Prior knowledge modeling for joint intent detection and slot filling
Contents:
Readership: Researchers and engineers working on Fuzzy Logic and Intelligent Technologies.
https://doi.org/10.1142/9789811269264_fmatter
https://doi.org/10.1142/9789811269264_0001
Spoken Language Understanding (SLU), a crucial part of spoken dialogue systems, comprises the slot filling (SF) and intent detection (ID) tasks, which are highly correlated and influence each other. A joint learning model can share information between SF and ID to effectively improve experimental results. However, most existing models do not exploit the information shared between SF and ID, which limits their performance. In this paper, we propose a novel prior-knowledge-based joint learning model that better utilizes the semantic information shared between SF and ID. Experimental results on three public datasets show that the proposed model better expresses sentence semantics and improves the accuracy of both the ID and SF tasks.
https://doi.org/10.1142/9789811269264_0002
Microgrid technology provides an effective solution for industrial flexible electricity consumption and promotes the utilization of renewable energy. In this paper, to achieve active and reactive power sharing in industrial microgrids with complex impedances, a distributed adaptive virtual impedance control method is proposed. To handle the power coupling, an impedance-power droop equation is proposed to generate virtual resistance and inductance, thereby eliminating the effect of mismatches among line impedances. The method is highly resilient to power and line changes in industrial scenarios. On this basis, a practical consensus is used to obtain the desired power representing power sharing conditions. Finally, simulations have been carried out in MATLAB/Simulink to illustrate the validity of the theoretical results.
https://doi.org/10.1142/9789811269264_0003
A thrust device adds flexibility to the parafoil system. Controlling the flight height of the parafoil system through thrust is of great significance for the parafoil to complete its task. This paper applies a linear active disturbance rejection control (LADRC) method based on Deep Deterministic Policy Gradient (DDPG) optimization to the altitude control of the powered parafoil system. DDPG is used to obtain adaptive parameters for LADRC, thus achieving better control performance. The simulation results verify the effectiveness of the proposed method by comparison with traditional LADRC with fixed parameters.
https://doi.org/10.1142/9789811269264_0004
Research on acoustic micro-nano manipulation started with the discovery of the Chladni effect. Acoustic manipulation is expected to be applied to the culture of biological tissues and cells, micro-nano element assembly, the allocation of chemical raw materials, and other micro-nano scale fields. It offers the advantages of being contactless, biocompatible, environmentally compatible, and functionally diverse. However, the accuracy and intelligence of acoustic manipulation still leave a large gap to be crossed. Very recently, deep reinforcement learning has been widely discussed, providing a new idea for micro-nano manipulation. In this paper, the Deep Q Network (DQN) algorithm is employed to improve the efficiency and intelligence of the acoustic manipulation process. As a demonstration, linear motion tasks based on acoustic waves are trained and displayed. Consequently, an accurate acoustic frequency sequence can be obtained to direct the actual process of acoustic manipulation.
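The value-learning idea behind DQN can be illustrated with a tabular stand-in (an illustrative sketch, not the paper's deep network or acoustic setup): a Q-table is trained to move a particle along a discrete line to a target cell, mirroring the linear motion task described above.

```python
import random

def train_q_learning(n_positions=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D positioning task: drive a particle
    from cell 0 to cell n_positions-1 using left/right actions."""
    rng = random.Random(seed)
    actions = (-1, +1)
    q = {(s, a): 0.0 for s in range(n_positions) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_positions - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_positions - 1)
            r = 1.0 if s2 == n_positions - 1 else -0.1
            # temporal-difference update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    # extract the greedy policy for every non-terminal state
    return {s: max(actions, key=lambda act: q[(s, act)]) for s in range(n_positions - 1)}
```

After training, the greedy policy moves right at every position, i.e. it has learned the shortest path to the target.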
https://doi.org/10.1142/9789811269264_0005
Data is usually hierarchically structured and can be aggregated at various levels in three dimensions: object, time, and location. Different aggregations result in data of different granularities, and their relationships can be used to improve the learning ability of models. In this paper, we utilize one of them and establish a simple yet effective consistency constraint (CC) for the learning process: the sum of fine-grained forecasts should be equal to the corresponding coarse-grained forecast. Based on it, we propose a hierarchical reconciliation least square (HRLS) method for a group of linear regression problems. In order to evaluate the consistency, a new performance indicator is designed. Moreover, our method has been tested on both real-world and synthetic datasets, and compared with existing hierarchical and non-hierarchical methods. The experimental results demonstrate its superiority in terms of both forecast accuracy and hierarchy consistency. Finally, we note that the proposed HRLS can be explained as a new way of regularization, and the source code and data of this paper are available online at https://github.com/charlescc2019/HRLS.
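The consistency constraint above can be enforced post hoc by a minimal least-squares reconciliation (an illustrative sketch of the general idea, not the paper's HRLS method): the coarse and fine forecasts are minimally adjusted so that the fine forecasts sum exactly to the coarse one.

```python
def reconcile(coarse, fine):
    """Least-squares reconciliation of a two-level hierarchy: distribute the
    inconsistency d = coarse - sum(fine) evenly over all n+1 forecasts, which
    is the minimal-adjustment solution under equal weights."""
    n = len(fine)
    d = coarse - sum(fine)        # hierarchy inconsistency
    shift = d / (n + 1)
    new_coarse = coarse - shift
    new_fine = [f + shift for f in fine]
    return new_coarse, new_fine
```

For example, reconciling a coarse forecast of 10.0 with fine forecasts [2.0, 3.0, 4.0] (which sum to 9.0) moves each value by 0.25, so the adjusted fine forecasts again sum to the adjusted coarse forecast.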
https://doi.org/10.1142/9789811269264_0006
Automated Theorem Proving (ATP) is a challenging area of automated reasoning. The inference mechanism of most state-of-the-art first-order theorem provers is essentially a binary resolution method. Resolution involves only two clauses and generates a clause with many literals at every deduction step, so the search space explodes very quickly. The multi-clause standard contradiction separation (S-CS) calculus for first-order logic, a breakthrough in automated reasoning, can overcome these limitations. Based on the S-CS rule, we propose a novel method called the complementary ratio in this paper. The complementary ratio is then integrated into the leading ATP system Vampire and tested on the CASC-28 competition theorems (FOF division). The results show that the complementary ratio improves the performance of both the CS-based prover and Vampire.
https://doi.org/10.1142/9789811269264_0007
To achieve optimal path following performance of the parafoil system, we propose a real-time control method based on double deep Q-network (DQN) optimized active disturbance rejection control (ADRC). This method can choose the best ADRC parameters for the system at different states using the double DQN. The tracking performance of the parafoil system is evaluated under environmental disturbances. The results show that the ADRC with adaptive parameters optimized by the double DQN performs well under external interference and inherent uncertainty. Moreover, compared with traditional ADRC, the proposed method achieves better control performance.
https://doi.org/10.1142/9789811269264_0008
Transmission line icing prediction can effectively reduce the losses from large-area power grid paralysis caused by icing. The recently proposed Informer model can be used for transmission line icing prediction. Informer improves the self-attention mechanism, reduces memory usage, and speeds up prediction. However, its accuracy is not high in practical applications. To increase the accuracy of transmission line icing prediction, we extend the Informer model and improve its self-attention distillation mechanism, so that after the encoder module extracts deeper features, the dominant features among them are given higher weights. Experimental results on a real dataset provided by China Southern Power Grid Corporation show that the proposed method achieves smaller error and higher accuracy in transmission line icing prediction than traditional SVR and LSTM.
https://doi.org/10.1142/9789811269264_0009
Crowdfunding has become one of the hottest fields in internet finance in recent years, yet the success rate of crowdfunding projects is as low as 39.07%. Though various factors that may impact the success of a crowdfunding project have been investigated, the persuasion effect in the semantically rich project description, i.e., shaping the donors' attitudes and ultimately influencing their donation decisions, has seldom been explored. Due to the intangibility and weak measurability of disclosed project information, as well as the cognitive subjectivity and non-professionalism of small-capital donors, the persuasion effect of project description information may play a significant role in donors' decisions. Yet current state-of-the-art studies have not identified the persuasion effect by coupling it with deep learning techniques. This paper proposes a hierarchical model to identify key persuasion dimensions and explore the learning of the persuasion effect in crowdfunding. Specifically, the bottom-level model, the persuasion score model (PSM), is proposed to identify persuasion dimensions so as to unify and represent information from a comprehensive and hybrid perspective. The top-level model, the persuasion effect model (PEM), is composed of a recurrent neural network that captures the complex persuasion inference in the description text. The proposed hierarchical model is evaluated on Indiegogo datasets against baseline methods, demonstrating its superior performance. Moreover, by modeling the persuasion effect in a deep learning framework, the derived results offer good interpretability, showing merit in supporting managerial practice.
https://doi.org/10.1142/9789811269264_0010
A sustainable production of high-quality agricultural products calls for personalized, rather than massive, operations. Such personalized operations can be pursued through human-like reasoning applied per case. The interest here is in robotic grape harvesting, where a binary decision must be taken given a set of ambiguous constraints represented by a Boolean lattice ontology of inequalities. Fuzzy lattice reasoning (FLR) is employed for decision making. Preliminary experimental results on expert data demonstrate the advantages of the proposed method, including parametrically tunable, rule-based decision making involving, in principle, either crisp or ambiguous measurements, also beyond rule support; combinatorial decision making is also feasible.
https://doi.org/10.1142/9789811269264_0011
In classification and decision making, combining classifiers into what is known as a classifier ensemble is a common approach. The idea is to improve classification accuracy through diversity. In these systems, perhaps the most important part is combining the different outputs produced by the individual classifiers. However, most approaches found in the literature use simple combination methods such as majority voting or weighted means. In this paper, we present new approaches to combining classifier outputs in an ensemble: Fuzzy Majority Voting and Fuzzy Plurality Voting, which are fuzzy counterparts of classical majority and plurality voting. The results obtained show that both are promising methods for use in these systems.
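A minimal sketch of the fuzzy-plurality idea (assuming each classifier outputs class membership degrees in [0, 1]; the paper's exact aggregation operators may differ): memberships are summed per class and the class with the largest total wins, which can overturn a crisp plurality of weakly confident votes.

```python
def fuzzy_plurality_vote(outputs):
    """outputs: list of per-classifier dicts {class_label: membership in [0, 1]}.
    Sum the membership degrees per class and return the winning label."""
    totals = {}
    for out in outputs:
        for label, mu in out.items():
            totals[label] = totals.get(label, 0.0) + mu
    return max(totals, key=totals.get)
```

In the example below, two classifiers weakly prefer class "a" while one strongly prefers class "b"; crisp plurality voting would pick "a", but the aggregated memberships favor "b".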
https://doi.org/10.1142/9789811269264_0012
The dryer section is the most energy-intensive part of the papermaking process, and the requirements for energy saving and consumption reduction in the dryer section are increasing under environmental and energy constraints. To reduce energy consumption, this paper proposes a method for constructing a paper dryer model based on a digital twin: the chemical simulation software CADSIM Plus is combined with the dryer mechanism model to build a digital twin model. On this basis, a genetic algorithm is used to optimize its energy consumption. The results show that the optimized drying process parameters achieve the intended energy savings.
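The genetic-algorithm optimization step can be sketched generically (a toy real-coded GA minimizing a stand-in objective, not the CADSIM Plus dryer model): candidate parameter vectors are evolved through selection, crossover, and mutation.

```python
import random

def genetic_minimize(f, bounds, pop_size=30, generations=60, mut=0.1, seed=1):
    """Minimal real-coded GA: keep the best half as elites, create children by
    blend crossover of two elites plus Gaussian mutation, clip to bounds."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)

    def clip(x):
        return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=f)[: pop_size // 2]      # selection
        children = []
        while len(children) < pop_size - len(elite):
            p, q = rng.sample(elite, 2)
            child = [(a + b) / 2 + rng.gauss(0, mut) for a, b in zip(p, q)]
            children.append(clip(child))                  # crossover + mutation
        pop = elite + children
    return min(pop, key=f)
```

Minimizing a simple quadratic stand-in for energy consumption, the GA should land close to the known optimum.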
https://doi.org/10.1142/9789811269264_0013
Computed tomography (CT) is the primary method for the diagnosis of pancreatic cancer. Accurately segmenting the pancreas from abdominal CT images has significant medical value. However, the complex characteristics of the pancreas and the high material and human cost of manually labeling medical images make pancreas segmentation a challenging task. To solve these problems, we propose a nested U-Net network structure with an integrated attention mechanism, which can capture and fuse global and local information without significantly increasing the computational cost. In addition, we propose a semi-supervised learning strategy to update network parameters. The proposed method is evaluated on the public NIH pancreas dataset. Experimental results show that the DSC value of our proposed network reaches 89.13 under the fully supervised learning strategy, which is more promising than other advanced methods. Under the semi-supervised learning strategy, only a small amount of the training data is needed to achieve segmentation performance similar to fully supervised learning.
https://doi.org/10.1142/9789811269264_0014
As an effective tool for analyzing human behavior and decision cognition, rule extraction is one of the important steps of knowledge discovery. In order to make decisions with a high confidence level and improve the rate of information acquisition in an uncertain environment, this paper establishes a rule extraction algorithm for fuzzy linguistic concept knowledge under the fuzzy linguistic concept decision formal context. We first introduce the weak consistency relationship in the decision context. We then define the finer relationship between the conditional concept lattice and the decision concept lattice to obtain the consistency relationship between fuzzy linguistic concept knowledge. Further, we mine the implicit rules and their confidence degrees in the decision-making environment. Finally, financial decision making is taken as an example to illustrate the effectiveness and practicability of the method.
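The confidence degree of an extracted rule follows the standard definition confidence(A => B) = support(A and B) / support(A). A minimal sketch over a toy decision table (hypothetical attributes for illustration, not the paper's financial example):

```python
def rule_confidence(rows, condition, decision):
    """Confidence of the rule `condition => decision` over a decision table:
    P(decision | condition). `rows` is a list of dicts mapping attribute names
    to values; condition and decision are (attribute, value) pairs."""
    ca, cv = condition
    da, dv = decision
    cond_rows = [r for r in rows if r[ca] == cv]
    if not cond_rows:
        return 0.0                      # rule never fires
    hits = sum(1 for r in cond_rows if r[da] == dv)
    return hits / len(cond_rows)
```

On the toy table below, "income = high" holds in three rows and "approve = yes" in two of them, so the rule's confidence is 2/3.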
https://doi.org/10.1142/9789811269264_0015
With the rapid development of artificial intelligence and big data, people's lives are becoming more and more intelligent; the importance of uncertainty is thus increasingly recognized, and various solutions have been proposed. Linguistic values are a key tool for obtaining information in cognition, decision making, and execution, and can reduce the loss of information during expression. To better express the connotation of information, this paper introduces three kinds of linguistic truth-valued fuzzy negation operators and proposes an operation method based on linguistic truth-valued logic systems. This can deepen people's understanding of the negative connotations of relationships, so as to better acquire knowledge.
https://doi.org/10.1142/9789811269264_0016
Based on a linguistic formal context with fuzzy objects, this paper describes the relationship between objects and linguistic concepts, and constructs a linguistic concept decision matrix with fuzzy objects. At the same time, inspired by the classical TOPSIS decision method and vector operations, a TOPSIS decision method based on the linguistic formal context with fuzzy objects is proposed. The positive and negative ideal solutions are determined from the linguistic concept decision matrix, and the pseudo-distance and closeness degree between each object and the positive (negative) ideal solutions are calculated to select the most satisfactory alternative.
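The classical TOPSIS procedure that the proposed method builds on can be sketched as follows (standard crisp TOPSIS with vector normalization; the paper's linguistic-concept variant replaces these distances with pseudo-distances over the decision matrix):

```python
import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j; benefit[j] is True
    if larger values are better. Returns closeness coefficients (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # positive/negative ideal solutions per criterion
    pos = [max(v[i][j] for i in range(m)) if benefit[j] else min(v[i][j] for i in range(m))
           for j in range(n)]
    neg = [min(v[i][j] for i in range(m)) if benefit[j] else max(v[i][j] for i in range(m))
           for j in range(n)]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # relative closeness to the positive ideal solution
    return [dist(v[i], neg) / (dist(v[i], pos) + dist(v[i], neg)) for i in range(m)]
```

In the test, the first alternative is best on the benefit criterion and lowest on the cost criterion, so it coincides with the positive ideal solution (closeness 1.0).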
https://doi.org/10.1142/9789811269264_0017
Transmission line tension prediction during the icing period is essential for developing anti-icing strategies for power grids. Most current time-series-based icing prediction models ignore external factors, such as micro-meteorological information and transmission line element information. To better address this problem, we propose a novel neural network architecture that uses multivariate information and other auxiliary information for more accurate prediction. Experimental results on a real dataset show that our model improves prediction accuracy over existing solutions.
https://doi.org/10.1142/9789811269264_0018
Finding a D-optimal design to estimate model parameters can be challenging, especially when the model is complex and high-dimensional. Some evolutionary algorithms have been applied to tackle the problem, but only for relatively simple statistical models. We employ several variants of differential evolution to find D-optimal designs for five different types of statistical models. Our simulation experiments show that the LSHADE variant outperforms the other variants.
https://doi.org/10.1142/9789811269264_0019
This paper presents a generalized linguistic variable which can be viewed as an extension of the (ordinary) linguistic variable proposed by Zadeh. After analyzing the fuzzy sets FScom developed by Pan, we find that FScom has at least the following shortcomings: 1) given any fuzzy set A, the medium negative fuzzy set of A is non-normal; 2) in FScom, the parameter λ is non-trivial, i.e., its value is not easy to determine. In order to sketch the essential and intrinsic relationships between fuzzy knowledge and its different negation forms, we define a novel type of generalized fuzzy set with contradictory, opposite, and medium negations, GFScom, and further explore several basic algebraic operations and properties, as well as convexity and concavity, with respect to GFScom. Moreover, we apply the generalized linguistic variable (GFScom) to the Mamdani controller and suggest a novel form of fuzzy controller that considers three kinds of negation in a fuzzy system. A simple demonstration shows that the generalized linguistic variable (GFScom) makes the fuzzy reasoning capability of a fuzzy system much richer.
https://doi.org/10.1142/9789811269264_0020
With the development of communication technology, the modeling and analysis of Petri nets (PN) in a networked environment have attracted the attention of researchers. This paper investigates the impact of event delay on the modeling and analysis of Petri nets with the help of the semi-tensor product (STP). Firstly, Petri nets with fixed-step event delay are expressed in an algebraic form. Subsequently, networked reversibility is proposed for bounded Petri nets with fixed-step event delay, and its necessary and sufficient conditions are given as a matrix condition. Finally, an example is given to verify the validity of the theoretical results.
https://doi.org/10.1142/9789811269264_0021
This paper proposes a synchronization solution for model-free discrete-time leader-following systems based on the Asynchronous Advantage Actor-Critic (A3C) algorithm. The optimization objective is a value function constructed from the consensus error. Furthermore, a multi-concurrency training method is applied to train the actor net and the critic net, which are responsible for generating optimal policies and estimating the value of the error-action pair, respectively. In this way, time-correlated data from the system is turned into independent and identically distributed data, ensuring the feasibility and speed of the algorithm. Finally, a simple simulation is provided to validate the efficiency of the proposed solution.
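The synchronization objective, driving each follower's consensus error to zero, can be illustrated with a classical discrete-time leader-following protocol (a baseline sketch of the control goal, not the paper's A3C-based method):

```python
def leader_following(x_leader, x_followers, neighbors, eps=0.2, steps=200):
    """Discrete-time leader-following consensus: each follower moves toward
    the states of its neighbors. neighbors[i] lists follower indices that
    follower i listens to; the value None marks a direct link to the leader."""
    x = list(x_followers)
    for _ in range(steps):
        nxt = []
        for i, nbrs in enumerate(neighbors):
            # consensus error: sum of differences to all neighbor states
            u = sum((x_leader if j is None else x[j]) - x[i] for j in nbrs)
            nxt.append(x[i] + eps * u)
        x = nxt
    return x
```

With a chain topology (follower 0 pinned to the leader, follower 1 following follower 0, and so on), all follower states converge to the leader's state.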
https://doi.org/10.1142/9789811269264_0022
In this paper, we investigate the problem of multi-agent Simultaneous Localization and Mapping (SLAM). We introduce an adaptive extended Kalman filter (AEKF), which enables each agent to estimate the noise characteristics of the environment in real time, thus obtaining an accurate local map. At the same time, each agent interacts with its neighbor agents to compute the global map using a distributed information filter. The simulation results show that the map fusion algorithm with AEKF has better stability and precision than traditional methods.
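The idea of adapting noise statistics online can be shown in one dimension (a simplified scalar sketch with an innovation-based noise estimate; the paper's AEKF for SLAM is a full extended Kalman filter over vehicle and landmark states):

```python
def adaptive_kf(measurements, q=1e-4, r0=1.0, forget=0.95):
    """Scalar Kalman filter tracking a slowly varying state, with the
    measurement-noise variance R re-estimated online from the innovation
    sequence (exponential average of innovation^2 minus predicted variance)."""
    x, p, r = measurements[0], 1.0, r0
    for z in measurements[1:]:
        p += q                        # predict (random-walk state model)
        innov = z - x                 # innovation
        # adapt R from the innovation statistics, floored to stay positive
        r = max(forget * r + (1 - forget) * (innov * innov - p), 1e-6)
        k = p / (p + r)               # Kalman gain
        x += k * innov                # update state estimate
        p *= (1 - k)                  # update estimate variance
    return x
```

Fed noisy measurements of a constant signal, the filter settles close to the true value without being told the measurement noise level in advance.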
https://doi.org/10.1142/9789811269264_0023
Competitive relationship analysis is of great importance in the fiercely competitive e-commerce marketplace. Due to the large quantity and high homogeneity of competitive products, it is challenging for merchants to customize marketing tactics that highlight their products' relatively advantageous features so as to outperform others. This paper aims to design a machine learning method to analyze competitive relationships, including competitive entity identification and advantaged feature detection. Specifically, this study incorporates a valuable data source reflecting consumers' perspective, i.e., online reviews, together with a data source from merchants' perspective, i.e., product descriptions, to capture comprehensive competitive relationships at both the product and feature levels. Furthermore, owing to the multiple perspectives of the data sources, a heterogeneous network embedding method is developed. Data experiments and user experiments demonstrate the superiority of the proposed method.
https://doi.org/10.1142/9789811269264_0024
This paper studies consensus formation on a constraint set. A bounded consensus protocol for agents with continuous-time dynamics is put forward to achieve formation. The proposed protocol uses a smooth bounded function and is composed of a formation part and a projection part. The correctness of the protocol is proved via a Lyapunov function. Finally, the effectiveness of the algorithm is verified by a simulation.
https://doi.org/10.1142/9789811269264_0025
Entity Alignment (EA) aims to identify entities in two knowledge graphs that represent the same real-world entity. Recently, entity embedding-based models have become the mainstream approach to EA. However, these models have the following shortcomings: (1) the ratio of seed alignments seriously affects EA performance, and acquiring them often requires substantial labor costs; (2) entity embeddings do not take into account the differences between entities. To address these problems, an entity embedding-based model via contrastive learning is proposed for EA between KGs without utilizing pre-aligned seed entity pairs; it not only integrates entity attribute information into the entity embeddings, but also enhances the discrimination between different entity embeddings. Experimental results on two real-world knowledge bases show that our proposed model achieves a good improvement in the three common metrics for the entity alignment task, i.e., hits@1, hits@10, and MR.
https://doi.org/10.1142/9789811269264_0026
The main purpose of this paper is to solve the edge detection (ED) problem through machine learning (ML) techniques. ED is one of the main image processing techniques and has found applications in a wide range of tasks. For this purpose, a pixel-by-pixel classification approach is proposed. The predictors employed to build the classifiers include information about the pixel neighborhood and structures of connected pixels called edge segments. This approach allows working with the edge information provided by the Canny algorithm. The first 50 images of the Berkeley segmentation dataset (BSDS500) are used. The performance of our pixel-by-pixel classification approach was tested with logistic regression, neural networks, and support vector machines. The results showed evaluation measures significantly higher than those of the standard Canny algorithm, proving our pixel-by-pixel classification to be a promising approach for improving edge detection performance.
https://doi.org/10.1142/9789811269264_0027
In a stochastic uncertain environment, if the state transition of a mobile agent control system has only one preset plan, then once this plan fails, the system immediately enters a fault state. Therefore, multiple alternative plans can be provided in the system design to improve reliability; that is, alternatives can be executed after the current plan fails. Considering the mobile agent control system in an uncertain environment, this paper proposes a mobile agent control system with multiple alternative plans for system state transition. The nature of the system allows us to use one of the most recently developed open-source model checkers for multi-agent systems, MCMAS, to perform model checking of the safety verification task when designing the system. We formally model the proposed control system in Interpreted Systems Programming Language (ISPL) descriptions, which is the input language of MCMAS. Finally, MCMAS is used to validate the established ISPL model. The results show that the control system with multiple alternatives satisfies the required properties and can greatly improve system reliability.
https://doi.org/10.1142/9789811269264_0028
Recently, in the context of complex production and construction environments, the detection of unsafe behavior has become increasingly necessary to ensure the safety of construction projects. In this paper, a multi-level pyramidal feature fusion network based on an attention mechanism is proposed for detecting and identifying helmets worn by personnel. To improve detection speed and accuracy, the network uses a residual block structure and introduces the ECAttention channel attention mechanism to achieve cross-channel interaction. By doing so, it significantly reduces model complexity while maintaining a high level of performance. To verify the effectiveness of the proposed detection network, this study compares it against several outstanding detection methods on existing public datasets and images obtained from the Internet. The results show that the proposed network's detection efficiency is higher, demonstrating the ability to achieve real-time, high-precision detection of helmets worn at production sites.
https://doi.org/10.1142/9789811269264_0029
In this paper, a control model is proposed to realize the coordinated control of a traffic system composed of an automated intersection and intelligent vehicles. Firstly, the conflict circle and a vehicle motion model are introduced. Then the conflict model is constructed using the circular contour of the vehicle: collision avoidance only needs to consider the constraints between two vehicles within the conflict circle. Finally, a control model based on stability and traffic efficiency is proposed. The model takes the vehicle's acceleration and its speed and time of entering the intersection as variables. The results show that, compared with traffic light control, the optimization strategy can effectively improve traffic efficiency while satisfying safety and stability requirements.
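The circular-contour conflict test can be sketched geometrically (an illustrative check assuming constant velocities and circular vehicle contours, not the paper's full optimization model): two vehicles conflict if their contour circles overlap at any time in the planning horizon.

```python
def vehicles_conflict(p1, v1, p2, v2, r1, r2, horizon=10.0, dt=0.05):
    """Conflict check with circular vehicle contours. Vehicle i starts at
    position p_i = (x, y) and moves with constant velocity v_i = (vx, vy);
    a conflict occurs if the circles (radii r1, r2) overlap at any sampled
    time within the horizon."""
    t = 0.0
    while t <= horizon:
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if dx * dx + dy * dy < (r1 + r2) ** 2:
            return True               # contours overlap: conflict
        t += dt
    return False
```

Two vehicles approaching the intersection center at the same speed collide; slowing one of them down resolves the conflict, which is exactly the degree of freedom the control model exploits.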
https://doi.org/10.1142/9789811269264_0030
The performance of automated theorem provers on large-scale mathematical problems is greatly reduced compared with smaller-scale problems; premise selection is one of the effective solutions. Since current graph neural networks usually aggregate information from neighbor nodes to update the feature representation of the central node, the order information between child nodes and the node types in first-order logical formulas are ignored. To address these problems, a new graph neural network model based on treelets and edge weights is proposed in this paper to encode first-order logical formulas. The experimental results show that the proposed model performs better on the premise selection task, improving classification accuracy by about 2% on the same dataset compared with the best of the current graph neural network models.
https://doi.org/10.1142/9789811269264_0031
Neural Machine Translation (NMT) is the most widely used machine translation method. At the same time, to ensure translation quality, NMT output needs to be post-edited by human translators. In this paper, a recurrent neural network (RNN) language model is trained on relevant datasets, and the training results are used in a corpus machine translation test to obtain machine-translated texts. Based on the differences between Chinese and English and relevant translation standards, we construct quantitative criteria to evaluate the "understandability" of the machine translation output, and conduct manual post-editing based on the evaluation results to minimize the loss of understandability and thereby optimize the output. This research has implications for how to combine machine translation efficiency with manual translation optimization in natural language processing, so as to improve translation quality.
https://doi.org/10.1142/9789811269264_0032
This research utilizes several well-known Convolutional Neural Networks (CNNs) for facial expression recognition. By taking advantage of transfer learning, deep networks are able to perform a new classification task with a comparatively smaller training dataset. The experiment was efficiently executed by using these models to classify seven universally recognized emotions, i.e. neutral, happiness, sadness, anger, disgust, fear, and surprise. The models were also fine-tuned using a grid search strategy to identify optimal hyperparameter settings. Evaluated using the CK+ dataset, the transfer learning networks show reasonable performance.
https://doi.org/10.1142/9789811269264_0033
Plant diseases result in significant economic losses each year. Common plant diseases include early and late blight; for example, early blight is caused by a fungus while late blight is caused by a specific microorganism. If plant diseases are detected at an early stage and treated appropriately, such economic losses can be prevented. Therefore, in this research, we propose an ensemble model combining three transfer learning networks, i.e. Resnet50, VGG-16, and MobileNetv2, for plant leaf disease identification. Evaluated using the Plant Village dataset, the proposed ensemble transfer learning model achieves impressive performance in detecting healthy and unhealthy plant leaves, with improved accuracy rates.
https://doi.org/10.1142/9789811269264_0034
Hazard prediction ability refers to a driver’s skill in anticipating and detecting potential road hazards. Drivers with good hazard prediction ability are able to effectively handle various traffic information of the road environment and evaluate predictive cues to help facilitate the early detection of hazards. Insight into the poor areas of hazard prediction ability for specific traffic scenarios provides drivers with valuable information about the kind of measures most urgently needed to improve their driving safety. In this study, a simulated driving experiment is conducted and the multiple layer DEA model is applied to assess drivers’ hazard prediction ability. On the basis of the results, those underperforming drivers are distinguished. Moreover, by analyzing the weights allocated to each indicator from the model, the most problematic scenario and indicator are identified for each driver, which leads up to specific driver improvement recommendations (such as training programs).
https://doi.org/10.1142/9789811269264_0035
Aiming at the problems of large data volumes, high communication cost, controller vulnerability and poor expansibility, which cause extra energy loss in current centralized micro-grid control, a novel smart micro-grid topology for home applications is proposed. Moreover, a distributed optimization operation strategy based on the Gossip algorithm is proposed to optimize the operation of the micro-grid. In this strategy, optimized operation of the micro-grid is achieved solely by exchanging information between adjacent controllers; no central controller is required. Therefore, the problems of centralized control are effectively solved and the control performance of the system is improved, which facilitates plug-and-play implementation in the micro-grid. Finally, the feasibility of the proposed micro-grid structure, model and operation method is verified on the Matlab/Simulink simulation platform, and the simulation results are consistent with the theoretical analysis. This paper aims to design a novel micro-grid for home applications to help achieve the carbon peaking goal at the micro level.
https://doi.org/10.1142/9789811269264_0036
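The abstract above relies on gossip-style information exchange between adjacent controllers only. A minimal sketch of that idea, assuming the simplest randomized gossip averaging (repeatedly average the values of one random pair of neighbours), shows how all controllers converge to a global quantity without any central coordinator:

```python
import random

def gossip_average(values, neighbors, steps=2000, seed=0):
    """Randomized gossip: at each step a random edge (i, j) averages its two values."""
    rng = random.Random(seed)
    x = list(values)
    edges = [(i, j) for i in neighbors for j in neighbors[i] if i < j]
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2
        x[i] = x[j] = avg  # only the two adjacent controllers communicate
    return x

# Ring of 4 controllers, each knowing only its local power measurement (kW):
local_power = [3.0, 5.0, 1.0, 7.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = gossip_average(local_power, ring)
# Every local estimate converges to the global mean (4.0 kW here),
# which is the building block for distributed optimization.
```

Pairwise averaging conserves the network-wide sum, so the shared limit is exactly the global average; the paper's optimization strategy builds on this kind of neighbour-only consensus.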
With the growing popularity of Android apps, smartphones have become a major repository of private data. Malicious apps are becoming increasingly rampant, and even some seemingly ordinary apps may leak private data at any time, so identifying and detecting malware plays an important role in mobile security.
However, existing deep learning-based malware detection approaches suffer from poor scalability and high experimental costs. This is due to the diverse and complex detection steps, especially in the software analysis and feature extraction phases. To solve these problems, we propose a highly scalable full-process automation platform, ExpandDetector, which simplifies the analysis of the original program via a custom repackaging framework, generating well-structured features that facilitate the later construction of datasets and subsequent analysis.
Finally, we tested on the malware dataset CIC-AAGM2017; after repackaging, ExpandDetector increases app size by no more than 5% of the original. When extracting only static features, ExpandDetector takes about 15 seconds and 3 seconds, respectively, to perform a complete analysis of a larger app (up to 60 MB) and a smaller app (up to 30 MB). When both static and dynamic features need to be extracted, ExpandDetector outperforms existing methods by 5% to 15% in terms of the completeness of the extracted features.
https://doi.org/10.1142/9789811269264_0037
Metro security check risk evaluation plays an important role in identifying security check risks, and a differential security check mode can improve passenger throughput and metro service quality. In this paper, a multi-level comprehensive evaluation index system for differential security checks at stations is constructed, and a combination weighting method is selected to determine the index weights. Furthermore, a risk evaluation model for differential security checks in the metro, based on extension theory, is established. The case study shows that the selected station of the Nanjing metro is in a relatively safe state.
https://doi.org/10.1142/9789811269264_0038
Autogenerated Advertisements (AGAs) can be a concern for consumers if they suspect that Artificial Intelligence (AI) was involved. Consumers may hold an opposing stance toward AI, leading to missed profit opportunities and reputation loss for companies. Hence, companies need ways of managing consumers’ concerns. As part of designing such advice, we explore consumers’ discernment ability (DA) of AGAs. A quantitative survey was used for this purpose, with questionnaires administered to 233 respondents. A statistical analysis of these responses, including Z-tests, suggests that consumers can hardly pick out AGAs. This indicates that consumers may be guessing and thus do not possess any significant DA of our AGAs.
https://doi.org/10.1142/9789811269264_0039
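The "consumers may be guessing" conclusion in the abstract above is the textbook use of a one-sample Z-test for a proportion against chance level. A small sketch, with a hypothetical split of correct answers (the real survey counts are not given in the abstract):

```python
import math

def z_test_proportion(successes, n, p0=0.5):
    """Two-sided one-sample z-test for a proportion against H0: p = p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)           # standard error under H0
    z = (p_hat - p0) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt 2)) / 2
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 123 of 233 respondents correctly flagged an ad as AI-generated.
z, p = z_test_proportion(123, 233)
# |z| is small and p > 0.05, so there is no evidence of discernment beyond chance.
```

Failing to reject H0: p = 0.5 is exactly the "guessing" interpretation reported by the authors.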
Aiming at the problem of poor fault diagnosis of rolling bearings, and considering the non-stationary characteristics of vibration signals, a bearing fault diagnosis method based on STFT-SPWVD and an improved convolutional neural network is proposed. First, the short-time Fourier transform (STFT) and the smoothed pseudo Wigner-Ville distribution (SPWVD) are applied to the vibration signals of rolling bearings; analysis of the two methods motivates the combined STFT-SPWVD method, which achieves high time-frequency aggregation without cross-terms and yields time-frequency representations with distinct features. Finally, faults are detected using an improved convolutional neural network. Experimental results on the Case Western Reserve University bearing dataset show that the proposed method reaches an accuracy of 98.14% and can better distinguish different faults.
https://doi.org/10.1142/9789811269264_0040
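The STFT half of the time-frequency analysis described above can be sketched in a few lines: slide a window over the signal and take the FFT of each frame. This is a generic illustration on a synthetic tone, not the paper's pipeline; the 50 Hz sinusoid is a hypothetical stand-in for a bearing vibration signal.

```python
import numpy as np

def stft(x, win_len=128, hop=64):
    """Magnitude STFT: slide a Hann window over x and FFT each frame."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # shape: (freq_bins, time_frames)

fs = 1000                                          # sampling rate, Hz
t = np.arange(fs) / fs                             # 1 second of signal
spec = stft(np.sin(2 * np.pi * 50 * t))            # 50 Hz tone
peak_bin = int(spec.mean(axis=1).argmax())
peak_freq = peak_bin * fs / 128                    # bin index -> Hz
```

The SPWVD adds time- and frequency-direction smoothing to suppress the Wigner-Ville cross-terms; combining the two is what the abstract calls STFT-SPWVD.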
Customer requirements and specifications are becoming increasingly complex, resulting in more complicated production processes to meet them. With this complexity, anomalies and deviations enter the processes. On the other hand, a new generation of technology can handle process complexity and discover unusual executions of large and complex processes by tracing the data they generate and transforming it into insights and actions. Hence, using data in industry has become inevitable, giving it a fundamental role in improving the efficiency and effectiveness of any organization. However, it is not sufficient to store and analyze data; one must also be able to link it to operational processes and pose the right questions, and a deep understanding of end-to-end processes is needed, which may ultimately accelerate every aspect of detecting abnormal process executions and determining the process parameters responsible for quality fluctuations. Anomaly detection in manufacturing has therefore attracted serious attention: any divergence in a process may lead to quality degradation in manufactured products, energy wastage and system unreliability. This paper proposes an approach for anomaly detection in electroplating processes that combines the ordering relationships of process steps with boosted decision tree classifiers (the XGBoost system), employing dimensionality reduction via kernel principal component analysis, which turns out to be effective in handling nonlinear phenomena using a Gaussian kernel with a self-tuning procedure. This approach ensures good accuracy while maintaining enough generalization to meet the challenges of data size and complexity, improving detection rate and accuracy. The approach was validated using a dataset of electroplating executions from 2021.
The classified anomaly events produced by our approach can be used, for instance, as candidates for a generalized anomaly detection framework in electroplating.
https://doi.org/10.1142/9789811269264_0041
In this paper, a method for detecting nucleus movement of oocytes during the enucleation process, based on the mean shift algorithm, is proposed. It includes the following steps: (1) establish the target model ROIini and calculate its probability density histogram; (2) establish the target candidate model ROIcandi and calculate its probability density histogram; (3) use the Bhattacharyya coefficient to compare the similarity of the target model and the target candidate model; (4) locate the moving target. This is a general nucleus motility detection method that overcomes the limitations of the traditional mean shift target tracking algorithm and solves the problem of nucleus motion detection under low microscopic image resolution, changes in nucleus shape during enucleation, and large differences in the shapes of different oocyte nuclei. The method can be used in somatic cell nuclear transfer, where it can greatly improve the accuracy of oocyte enucleation, reduce cell damage, and further improve the developmental potential of recombinant cells.
https://doi.org/10.1142/9789811269264_0042
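Step (3) of the tracking loop above, the Bhattacharyya similarity between the target and candidate histograms, can be sketched directly; the grey-level histograms below are hypothetical, chosen only to illustrate the comparison.

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalized histograms; 1.0 means identical."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def normalize(hist):
    s = sum(hist)
    return [h / s for h in hist]

# Target model vs. candidate ROI histograms (hypothetical grey-level bins):
roi_ini = normalize([4, 10, 30, 40, 16])
roi_cand = normalize([5, 12, 28, 38, 17])
rho = bhattacharyya(roi_ini, roi_cand)
# rho close to 1 -> the candidate window still contains the nucleus; mean shift
# then moves the window toward the local density maximum and the test repeats.
```

Maximizing this coefficient over candidate window positions is what drives the mean shift iterations toward the moving nucleus.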
Consensus Reaching Processes (CRPs) aim at guaranteeing that the decision-makers (DMs) involved in a Group Decision-Making (GDM) problem achieve an agreed solution for the decision situation. Among other proposals to obtain such agreed solutions, the Minimum Cost Consensus (MCC) models stand out because of their reformulation of the GDM problem in terms of mathematical optimization models. Originally, MCC models were limited to computing agreed solutions from a simple distance measure that cannot guarantee reaching a given consensus threshold. This drawback was later addressed by the Comprehensive MCC (CMCC) models, which include consensus measures in the classic MCC approach. However, some real-world problems require analyzing the feasibility of the DMs choosing a certain alternative over the others, namely, the cost of achieving an agreed solution on a certain alternative. For this reason, this contribution introduces new CMCC models that drive DMs to an agreed solution on a given alternative, thereby providing a method to analyze the cost and appropriateness of guiding the group to a specific solution.
https://doi.org/10.1142/9789811269264_0043
Consensus reaching processes (CRPs) try to reach an agreement among the decision makers involved in a Group Decision Making (GDM) problem to obtain a solution accepted by all of them. In CRPs without feedback, Minimum Cost Consensus (MCC) models stand out among the consensus models because of their simplicity in achieving consensus automatically at minimum cost, that is, changing the decision makers’ initial preferences as little as possible. However, these MCC models cannot guarantee reaching the consensus threshold, because they do not consider a minimum consensus level among decision makers. To overcome this limitation, the Comprehensive MCC (CMCC) models have recently been proposed, including a new constraint to achieve the consensus threshold. These models apply the same unit cost whether the decision makers’ preferences are increased or decreased, which is not appropriate in some GDM situations. Therefore, we propose to use asymmetric costs in the CMCC models by applying an asymmetric distance that considers the direction of the change. These models, called asymmetric distance-based CMCC models, are developed to deal with fuzzy preference relations.
https://doi.org/10.1142/9789811269264_0044
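The asymmetric-cost idea above can be illustrated with a deliberately tiny one-dimensional sketch (not the authors' fuzzy-preference-relation model): each DM's adjusted opinion must lie within a consensus band of width `eps` around the collective value `x`, raising an opinion costs `c_up` per unit while lowering it costs `c_down`, and the optimal `x` is found here by naive grid search rather than mathematical programming.

```python
def asymmetric_mcc(opinions, eps, c_up, c_down, grid_steps=2000):
    """Grid-search the consensus value x minimizing total asymmetric adjustment cost."""
    lo, hi = min(opinions), max(opinions)

    def total_cost(x):
        # Each opinion o is clamped into [x - eps, x + eps] at asymmetric unit cost.
        return sum(c_up * max(0.0, (x - eps) - o) + c_down * max(0.0, o - (x + eps))
                   for o in opinions)

    best_x = min((lo + (hi - lo) * k / grid_steps for k in range(grid_steps + 1)),
                 key=total_cost)
    return best_x, total_cost(best_x)

# Three DMs' preference values on [0, 1]; raising a preference is twice as costly:
x, cost = asymmetric_mcc([0.2, 0.5, 0.9], eps=0.1, c_up=2.0, c_down=1.0)
```

Because upward moves are more expensive, the optimal consensus value sits lower than it would under a symmetric distance, which is exactly the direction-dependence the asymmetric distance is meant to capture.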
For a long time, China’s transportation safety situation has been generally stable. However, it remains grim: accidents are frequent, and the numbers of deaths and injuries in road traffic accidents are still high. It is therefore of great use to analyze and study the causes of traffic accidents. The main work of this paper is to explore the correlation between accident factors and traffic accident severity. Drawing on machine learning, the influence and correlation of human, vehicle, road and environmental factors on the severity of traffic accidents are analyzed using three statistical correlation coefficients and the maximal information coefficient. The aim is to improve the current road safety situation and thus reduce the occurrence of traffic accidents. The results show that the severity of traffic accidents correlates most strongly with the types of casualties and whether there is police intervention, and also correlates strongly with pedestrians, the number of vehicles involved, and the road level.
https://doi.org/10.1142/9789811269264_0045
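Two of the correlation measures used in studies like the one above can be computed in a few lines of plain Python; the toy severity/vehicle data below is hypothetical, and the maximal information coefficient is omitted since it requires a binning search (e.g. the minepy library) rather than a closed-form formula.

```python
def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Spearman rank correlation = Pearson on the ranks (no ties assumed)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

# Hypothetical toy data: accident severity vs. number of vehicles involved.
severity = [1, 2, 3, 4, 5]
vehicles = [1, 2, 4, 3, 5]
r = pearson(severity, vehicles)      # linear association
rho = spearman(severity, vehicles)   # monotonic association
```

Pearson captures linear dependence, Spearman (and Kendall) monotonic dependence, while the maximal information coefficient also detects non-monotonic relationships, which is why such studies report several measures side by side.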
With the continuous development of the economy and the improvement of scientific and technological levels, computer image processing technology has advanced greatly in recent years and is widely used across industries. To a great extent, image processing technology plays a supervisory role against environmental pollution and ecological damage, making efficient use of resources and promoting the implementation of sustainable development policies. Our project addresses the automatic recognition of train hydraulic brake oil level readings, for which we propose a model combining the FCOS detector with HSV color-space processing. We also applied this model to reservoir water level recognition and implemented a corresponding recognition algorithm. Automatic recognition technology replaces inefficient and high-risk manual work, which greatly promotes sustainable development.
https://doi.org/10.1142/9789811269264_0046
Accurate user preferences and item representations are essential factors for personalized recommender systems. Explicit feedback behaviors, such as ratings and free-text comments, are rich in personalized preference knowledge and emotional evaluation information, and are a direct and effective source of individualized preference and item latent representations. In this paper, we propose a novel neural model named BERT-RS for personalized recommender systems, which extracts knowledge from textual reviews and user-item interactions. First, we extract preliminary semantic representations for users and items from the textual comments based on BERT. Next, these semantic embeddings are turned into user and item latent representations through three different deep architectures. Finally, we carry out personalized recommendation tasks through score prediction based on these representations. Compared with other algorithms, BERT-RS demonstrates outstanding experimental performance on the Amazon dataset.
https://doi.org/10.1142/9789811269264_0047
To promote cooperation between non-car operating carriers and road transportation enterprises, a modified revenue-sharing contract is used in this paper to coordinate a logistics service supply chain involving a non-car operating carrier under transport demand and cost disruption. Numerical examples verify the theoretical results and analyze the impact of disruption management. The results show that the modified revenue-sharing contract can achieve arbitrary allocation of supply chain profits.
https://doi.org/10.1142/9789811269264_0048
This paper aims to determine the effectiveness of the MOSDMA (multistage one-shot decision-making approach) in an application to a financial information technology project from the central bank of Oman. The case studies from this organization are the first to utilize the MOSDMA. Qualitative and quantitative data sources were gathered to reconstruct the problem and apply the MOSDMA. The results show the high effectiveness of the proposed approach in supporting decision makers in re-evaluating such problems in actual practice. Moreover, such a scenario-based approach can bring confidence in, satisfaction with, and ownership of the decision, irrespective of future outcomes.
https://doi.org/10.1142/9789811269264_0049
Artificial Intelligence (AI) and Machine Learning (ML) are shaping marketing activities through digital innovations. Competition is a familiar concept for any digital retailer, and the digital transformation offers hope of gaining a competitive edge; those who do not adopt digital innovations risk being outcompeted by those who do. This study aims to identify AI marketing (AIM) adoptions used for ad optimization with Reinforcement Learning (RL). A scoping literature review is used to find research trends in ad optimization with RL in AIM. Scoping this area is important to both research and practice, as it reveals opportunities for novel adaptations and directions of research in digital ad optimization with RL. The review identifies several different adoptions of ad optimization with RL in AIM. In short, the major category is Ad Relevance Optimization, which takes several forms depending on the purpose of the adoption. The underlying themes of adoptions are Ad Attractiveness, Edge Ad, Sequential Ad and Ad Criteria Optimization. In conclusion, research on AIM adoptions with RL is scarce, and recommendations for future research are suggested based on the findings of the review.
https://doi.org/10.1142/9789811269264_0050
The aim of this study is to examine the omnichannel capacity of the banking industries in the E7 economies. For this purpose, the quality function deployment approach is taken into consideration. The analysis consists of five stages. Customer requirement dimensions are weighted in the first stage with the help of the interval type-2 hesitant fuzzy DEMATEL method. The type-2 hesitant fuzzy TOPSIS approach is used in the remaining stages to measure the omnichannel capacity of the customer requirements, evaluate the new service development process, assess the innovative channels, and rank the E7 countries with respect to omnichannel performance. The main novelty of this study is the evaluation of the omnichannel capacity of the service industry with a novel fuzzy decision-making model. The main reason for using type-2 hesitant fuzzy information is to model the hesitancy of the experts, so that uncertainties in this process can be handled more effectively.
https://doi.org/10.1142/9789811269264_0051
This paper presents a new approach to garment fit assessment using probabilistic neural networks (PNNs), aiming to promote the implementation of garment e-mass customization in the new era of Industry 4.0. The proposed method is supported by several PNN models. The inputs of each PNN model are the garment ease allowance at the feature position and the fabric mechanical property parameters collected in a 3D virtual design environment, while the output is real garment fitting data. The experimental results reveal that the approach is feasible and provides fast and precise garment fit prediction. Furthermore, a new interactive fashion design and manufacturing system for customized garments can be developed using the proposed models.
https://doi.org/10.1142/9789811269264_0052
This paper introduces a fabric texture dataset containing 300 labelled images to facilitate research on representation algorithms for this challenging scenario. The second contribution is a novel multilevel deep dictionary learning-based fabric texture classification algorithm that can discern different kinds of texture. An efficient layer-by-layer training approach is formulated to learn the deep dictionaries, followed by different classifiers for the fabric texture types. By varying the number of layers in the proposed algorithm, the performance of the different classifiers is compared. Because the proposed algorithm is supervised and achieves a high classification accuracy of 93.6%, it can be integrated into real-time systems.
https://doi.org/10.1142/9789811269264_0053
Human action recognition (HAR) has received extensive attention in artificial intelligence. Given the huge advantages of transformers over traditional deep neural networks in capturing global context and extracting effective features, in this paper we propose to utilize transformers to solve HAR problems; to our knowledge, this is the first application of transformers to HAR on sensor datasets. Experiments on 12 real sensor datasets with three evaluation metrics demonstrate the superiority of the transformer against four classic or state-of-the-art models.
https://doi.org/10.1142/9789811269264_0054
We introduce in this paper ResVidNet, a new one-dimensional convolutional neural network (CNN) architecture for COVID-19 detection. The proposed architecture is an enhanced version of ResNet18. The results are compared with a common one-dimensional CNN, a deep neural network, and ResNet18, and show better performance of ResVidNet in terms of accuracy and robustness compared to the networks mentioned above.
https://doi.org/10.1142/9789811269264_0055
Falls have been recognized as the major cause of accidental death among people aged 65 and above. Timely prediction of fall risk can help identify the elderly prone to falls and trigger preventive interventions. Recent advances in wearable sensor technology and big data analysis offer accurate, affordable, and easy-to-use approaches to fall risk prediction. In this paper, we assess the current state of body-worn sensor technology with machine learning methods for fall risk prediction. Fifteen out of 523 research articles were identified and included in this review. A systematic comparison was conducted across several aspects, including sensor types, functional tests, modeling methods, and prediction effectiveness. Additionally, we discuss future trends of fall risk prediction via sensor technology and highlight several challenging issues in the area.
https://doi.org/10.1142/9789811269264_0056
Long-term poor sitting posture can severely threaten children’s physical and mental health. Clothing, as an intimate but non-invasive presence on the human body, is an ideal carrier for wearable components, establishing personalized yet comfortable interaction between functional modules and the wearer. This paper develops a knitwear-based wearable system that detects the wearer’s back bending and chest-desk distance and then sends instructive alerts to help children cultivate good sitting habits. According to users’ feedback on the prototype, the proposed wearable system is generally satisfactory, but further exploration is needed to improve its function and design.
https://doi.org/10.1142/9789811269264_0057
Polylactic acid (PLA) is one of the most promising green polymers and has a wide range of applications in the textile, food packaging, and plastics industries due to its sustainable, environment-friendly and biodegradable character. However, its use is often restricted by its low crystallinity and crystallization rate, which affect its heat resistance and mechanical properties. The purpose of this research is to investigate the effect of nano-stereocomplex PLA (nano-scPLA) on the crystallization of PLA. The results show that nano-scPLA prepared from low-molecular-weight poly(L-lactic acid) (PLLA) and poly(D-lactic acid) (PDLA) possesses a crystallinity of 53.9%, most of which (98.8%) is the stereocomplex crystal structure. This nano-scPLA markedly promotes PLLA crystallization: the crystallinity of the resulting composite films increases from amorphous to 48.5%, and their tensile strength increases from 17.5 MPa to 60 MPa. Based on these results, nano-scPLA proves to be an efficient nucleating agent and reinforcement for the PLA matrix. The prepared films are expected to be applied as sustainable packaging materials.
https://doi.org/10.1142/9789811269264_0058
A novel DC-DC converter with high voltage gain for sustainable energy is proposed, providing a substitute topology for low- and medium-power application fields where high-voltage conversion is required. The proposed Sepic-based converter incorporates a coupled-inductor voltage multiplier circuit, which achieves higher voltage gain and lower voltage stress on the power devices when the duty ratio and input voltage are the same as in the traditional Sepic converter. Moreover, the input current ripple of the proposed converter is decreased, which allows the use of low-voltage, high-performance semiconductor devices and thus leads to high efficiency and stability. In this paper, the proposed DC-DC converter is analysed in detail; simulations and experimental results are then presented to verify its feasibility.
https://doi.org/10.1142/9789811269264_0059
A bypass truck is used to repair power devices online; however, it is difficult to drive a bypass truck into a complex, narrow environment, so a mobile bypass switch cabinet is required in such special conditions. For mobility, a high-frequency converter with power factor correction (PFC) is needed to reduce the device size. Thus, a high-frequency-input continuous conduction mode (CCM) PFC converter for AC-DC conversion, based on the LC resonance principle, is proposed. With a high-frequency input, the converter achieves CCM PFC through an LC resonant network and minimizes the switching frequency, thus reducing the switching loss. In this paper, the structure, working modes, circuit analysis and control method of the proposed converter are studied in detail. Finally, based on actual conditions, the function and superiority of the topology are verified by simulation. The results verify that the converter can be applied to AC-DC conversion in a bypass switch cabinet to achieve low switching loss and a high power factor at high-frequency input.
https://doi.org/10.1142/9789811269264_0060
This paper gives a comprehensive review of the scientific and economic interest of intelligent computational techniques applied to the construction of a sustainable circular economy, as well as the current methodologies and tools used and their cooperation with other digital tools, such as IoT and cloud platforms, in the context of Industry 4.0. Emphasis is placed on environmental impact evaluation, remanufacturing, and resource sustainability management and optimization, which play a key role in the circular economy beyond classical manufacturing themes. Based on this review, a short analysis of the future perspectives of this research theme is provided.
https://doi.org/10.1142/9789811269264_0061
Education for sustainable development (ESD) is critical for teenagers, who are regarded as future citizens. However, it is not easy to achieve, because sustainability is normally merged into other subjects and does not attract as much attention as the traditional subjects. Moreover, the importance and content of ESD vary among schools and teachers, which leads to fluctuating awareness levels. This study aims to design an interactive game-based device to attract teenagers and establish a relatively unified modular platform for ESD among teenagers from different regions. To promote participation through an immersive experience, role-acting performance, intelligent voice synthesis, and audio-visual feedback are applied in the device design. Preliminary studies show that awareness of sustainability is increased through entertainment and interaction.
https://doi.org/10.1142/9789811269264_0062
The sustainability problem of the textile and apparel industry has long been a hot social issue. Among many sustainable strategies, the benefits brought by supply chain management are increasingly evident, and supplier selection is the most critical link in supply chain management. Integrating sustainability into supplier selection makes it harder for apparel enterprises to choose suitable suppliers. This paper analyzes and integrates the criteria of a sustainable apparel supplier (SAS) selection system from the triple bottom line (TBL) perspective and proposes a sustainable selection method based on the TBL principle. First, we systematically collect sustainable supplier selection criteria and establish a hierarchy of criteria suitable for the apparel industry. Then, the Fuzzy Analytic Hierarchy Process (FAHP) is used to determine the weights for sustainable supplier selection in the apparel industry. Finally, the potential suppliers are ranked by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and a practical case verifies the feasibility of the model. This paper provides apparel enterprises with a new approach to supplier selection based on the sustainability concept.
https://doi.org/10.1142/9789811269264_0063
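The TOPSIS ranking step described in the abstract above follows a standard recipe: normalize and weight the decision matrix, find the ideal and anti-ideal solutions, and score each alternative by its relative closeness to the ideal. A minimal sketch with hypothetical supplier scores (the criteria and weights below are illustrative, not the paper's FAHP-derived ones):

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution.

    matrix: rows = alternatives, columns = criteria;
    benefit[j] is True for benefit criteria, False for cost criteria."""
    m = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    d = lambda row, ref: math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [d(row, anti) / (d(row, anti) + d(row, ideal)) for row in v]

# Hypothetical supplier scores on (quality, CO2 emissions, labour conditions):
scores = topsis([[8, 120, 7], [6, 80, 9], [9, 150, 5]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, False, True])
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

In the paper's pipeline, the FAHP stage would supply the `weights` vector and the TBL criteria hierarchy would define the columns; TOPSIS then only orders the candidate suppliers.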
Smart wearables are expected to give the elderly more peace of mind and a better experience in a comfortable home environment. This not only satisfies their desire to live relatively independently, but also avoids the waste of resources caused by the lack of timely detection of, and feedback on, aging and health problems, contributing to the sustainable development of society. To meet the current demand for smart wearable products for the elderly at home, and to direct limited design resources to the key design factors that enhance the consumer experience, a design element system is constructed from a sustainable perspective. Text analysis of academic literature and product evaluations is used to review the relevant theoretical foundations and the current state of products in this market category. On this basis, the user’s needs at the instinctive, behavioral and reflective levels are captured through expert panel interviews, and the initial requirement importance is calculated. The QFD house of quality model is then used to translate the needs into design factors and score them by their degree of relationship to the needs, finally constructing the design factor system of smart wearables for the elderly at home from a sustainable perspective.
https://doi.org/10.1142/9789811269264_0064
The standard contradiction separation rule in first-order logic breaks through the binary and static properties that are two remarkable features of canonical resolution, offering multiple advantages such as multi-clause handling, dynamic abilities and guidance. To further exploit these abilities, we propose a fully-reusing-clause method based on the standard contradiction separation rule and design a corresponding deduction algorithm. The algorithm is applied to the leading first-order prover Vampire to form V_FRC; its feasibility and superiority are illustrated through experiments.
https://doi.org/10.1142/9789811269264_0065
Clustering analysis is a significant technique of data mining. With the rise of lifelong learning, lifelong clustering has become a research topic. Lifelong clustering builds libraries shared among multiple tasks, and these tasks achieve effective information transmission by interacting with the shared knowledge libraries. However, selecting optimal hyper-parameters in the knowledge transfer process often relies on the actual clustering division of the dataset as a reference, which is unavailable during clustering. Moreover, the hyper-parameters for each task are typically set to constant values because of computational difficulty. Therefore, this paper explores a clustering method based on Bayesian inference, where the parameter setting is prior information and the clustering divisions obtained from the parameters are posterior information. In our method, the hyper-parameters corresponding to the maximum a posteriori (MAP) probability are selected for each task. We then apply this method to Lifelong Spectral Clustering to select hyper-parameters and propose a new algorithm, called Maximum a Posteriori Lifelong Spectral Clustering (MAPLSC). Finally, experiments on several real-world datasets show the effectiveness of our method; the average clustering performance of Lifelong Spectral Clustering is improved.
https://doi.org/10.1142/9789811269264_0066
Traffic accidents remain an important cause of death. Predicting the severity of possible traffic accidents helps speed up decision-making on accident treatment plans and reduce casualties, so a reliable severity prediction model for traffic accidents is desirable. There has been considerable research on traffic accident prediction, but traditional methods are susceptible to noise and not efficient enough. To conquer this challenge, we utilize a multi-diversified clustering ensemble approach (MDEC-HC) to predict traffic accidents. Finally, 12 real datasets and 7 algorithms are used to carry out extensive comparison experiments, whose results show that the clustering results generated by MDEC-HC have better robustness and accuracy.
https://doi.org/10.1142/9789811269264_0067
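One common way to realize such a diversified clustering ensemble, sketched here under the assumption that the consensus step is hierarchical clustering (the exact MDEC-HC construction is in the paper): diverse k-means members vote into a co-association matrix, and average-linkage clustering extracts the final partition.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def coassociation_ensemble(X, n_clusters, n_members=10, seed=0):
    """Generic clustering ensemble: members with varied k vote into a
    co-association matrix; hierarchical clustering gives the consensus."""
    rng = np.random.default_rng(seed)
    n = len(X)
    coassoc = np.zeros((n, n))
    for _ in range(n_members):
        k = int(rng.integers(2, 2 * n_clusters + 1))  # diversify member granularity
        labels = KMeans(n_clusters=k, n_init=4,
                        random_state=int(rng.integers(10**6))).fit_predict(X)
        coassoc += labels[:, None] == labels[None, :]  # co-membership votes
    dist = 1.0 - coassoc / n_members                   # votes -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust") - 1
```

Averaging over many diversified members is what gives the ensemble its robustness to the noise that trips up single clusterers.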
Multi-view clustering has attracted the attention of researchers in recent years and is a popular unsupervised machine learning technique. Conventional multi-view clustering struggles to handle data with missing views, a setting known as incomplete multi-view clustering. To address this problem, we propose a novel Graph Learning for Incomplete Multi-view Spectral Clustering (GIMSC) algorithm. GIMSC integrates individual graph learning, fusion graph learning and spectral clustering into a unified framework, which learns the consensus representation shared by all views via incomplete graph construction. GIMSC learns the adaptive local structure of each view, pre-constructed by k-nearest neighbors. We then construct the fusion graph with auto-weighted learning to explore the consensus similarity matrix for incomplete graphs of different sizes, which reduces the negative influence of outliers. An index matrix is introduced to transform between the incomplete and complete graphs of each view, and an iterative algorithm is proposed to solve the resulting optimization problem. In experiments, we extensively evaluate our method on four incomplete multi-view datasets, showing that it outperforms existing state-of-the-art methods.
https://doi.org/10.1142/9789811269264_0068
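The per-view graph construction and fusion steps can be sketched as follows. This minimal version assumes complete views and fixed fusion weights; GIMSC itself learns the weights automatically and uses index matrices to reconcile views of different sizes.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_affinity(X, k=10):
    """Local-structure graph of one view: symmetric k-nearest-neighbor
    connectivity (a stand-in for GIMSC's adaptive graph learning)."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    return np.maximum(W, W.T)  # symmetrize

def fused_graph(views, weights=None):
    """Convex combination of per-view graphs: a fixed-weight simplification
    of GIMSC's auto-weighted fusion."""
    if weights is None:
        weights = [1.0 / len(views)] * len(views)  # equal weights by default
    return sum(w * knn_affinity(V) for w, V in zip(weights, views))
```

Spectral clustering then runs on the Laplacian of the fused graph to obtain the consensus partition.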
Class-imbalanced datasets are ubiquitous in the real world, while most existing algorithms are designed for balanced classes. Furthermore, traditional data augmentation methods mostly need to employ Markov chains and infer hidden variables during training. To break this situation, this paper designs a method that uses Generative Adversarial Networks (GANs) to produce additional data samples for classification tasks. A GAN uses back-propagation instead of a Markov chain, and the generator's parameter updates come not directly from the data samples but from the discriminator. Finally, experiments on 10 datasets are conducted with 3 classification models, and the results demonstrate the strong performance of the classifiers on the augmented data.
https://doi.org/10.1142/9789811269264_0069
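The adversarial update flow described above, no Markov chain and generator gradients arriving through the discriminator, can be shown in a deliberately tiny 1-D sketch with a linear generator and logistic discriminator. This is an illustrative toy, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(3.0, 1.0, size=5000)   # toy minority class to augment

# Linear generator G(z) = a*z + b, logistic discriminator D(x) = sigmoid(w*x + c).
a, b, w, c = 1.0, 0.0, 0.1, 0.0
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
lr, batch = 0.05, 64

for _ in range(2000):
    x = rng.choice(real, batch)
    z = rng.normal(size=batch)
    g = a * z + b                         # fake samples
    # Discriminator step: raise D(real), lower D(fake); plain back-propagation.
    dr, df = sigmoid(w * x + c), sigmoid(w * g + c)
    w -= lr * np.mean(-(1 - dr) * x + df * g)
    c -= lr * np.mean(-(1 - dr) + df)
    # Generator step: its gradient flows back THROUGH the discriminator,
    # not directly from the data samples.
    df = sigmoid(w * g + c)
    a -= lr * np.mean(-(1 - df) * w * z)
    b -= lr * np.mean(-(1 - df) * w)

synthetic = a * rng.normal(size=1000) + b  # extra minority-class samples
```

The synthetic samples would then be appended to the minority class before training the downstream classifiers.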
With the advent of the big data era, the data quality problem is becoming more and more prominent, and missing value filling is one of the key techniques for improving data quality, attracting much attention from researchers. Typical approaches use neural network models such as autoencoders, but these methods struggle to capture both data association features and data common features. To solve this problem, a missing value filling model based on a feature-fusion enhanced autoencoder is proposed. It designs a novel hidden layer in which de-tracking neurons and radial basis function neurons mutually enhance each other: the de-tracking neurons reduce the problem of invalid constant mappings and effectively capture the data association features, while the automatic clustering capability of the radial basis function neurons better learns the data common features. In addition, a dynamic clustering filling strategy with automatic iterative optimization is designed to achieve multidimensional feature fusion learning and dynamic collaborative filling. The effectiveness of the proposed model is verified by experimental comparison with traditional missing value filling methods on multiple datasets with different missing rates.
https://doi.org/10.1142/9789811269264_0070
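The iterative "fill, refit, refill" strategy can be sketched with a plain per-column regressor standing in for the feature-fusion enhanced autoencoder, which is the paper's actual model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def iterative_fill(X_missing, n_iter=10):
    """Iterative model-based filling loop: start from column means, then
    repeatedly re-predict each missing entry from the other columns."""
    X = np.array(X_missing, dtype=float)
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])  # crude initial fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = mask[:, j]
            if not rows.any():
                continue
            other = np.delete(X, j, axis=1)          # predict column j from the rest
            model = LinearRegression().fit(other[~rows], X[~rows, j])
            X[rows, j] = model.predict(other[rows])
    return X
```

When columns are correlated, each pass exploits the association features to sharpen the fills, which is the effect the paper's richer model amplifies.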
In the context of Industry 4.0, a large amount of industrial data is collected, providing a good basis for soft sensing modeling. However, industrial data may come from multiple working conditions that make the data vary locally, so the prediction performance of a global model largely depends on how the training and test data are divided. To illustrate this, a Gaussian mixture model (GMM) is used for data partitioning, and training and test data are then drawn proportionally from the different partitions. Finally, support vector regression (SVR) and multilayer perceptron (MLP) models are built under different training and test splits to observe the changes in R2, RMSE and MAPE. The results show that model performance is strongly affected by the partitioning; to obtain stable and usable models, data partitioning must be considered carefully.
https://doi.org/10.1142/9789811269264_0071
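The condition-aware split can be sketched as: discover modes with a GMM, then split train/test proportionally within each mode so both sides see every working condition. A generic sketch assuming scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def partition_split(X, y, n_components=3, test_size=0.3, seed=0):
    """Split train/test proportionally within each GMM-discovered mode
    (assumes every mode contains enough samples to split)."""
    comp = GaussianMixture(n_components=n_components,
                           random_state=seed).fit_predict(X)
    idx_tr, idx_te = [], []
    for c in np.unique(comp):
        idx = np.where(comp == c)[0]
        tr, te = train_test_split(idx, test_size=test_size, random_state=seed)
        idx_tr.extend(tr)
        idx_te.extend(te)
    return X[idx_tr], X[idx_te], y[idx_tr], y[idx_te]
```

SVR or MLP models can then be fitted on the training part and scored on the test part with R2, RMSE and MAPE to observe the partitioning effect.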
Previous assessment models based on computational intelligence for virtual reality simulators did not consider interval-valued data, which can be modelled by the trapezoidal distribution. This work proposes a new Fuzzy Trapezoidal Naive Bayes Network as the basis for a single user assessment system (SUAS) to be used in virtual reality simulation for training purposes. The results show that the assessment system based on the trapezoidal distribution achieves better results than other SUAS based on different Naive Bayes Networks.
https://doi.org/10.1142/9789811269264_0072
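The core ingredients, a trapezoidal likelihood combined with naive Bayes scoring, can be sketched as below. The (a, b, c, d) parameters and priors here are illustrative, not the paper's fitted SUAS model.

```python
import math

def trapezoidal_pdf(x, a, b, c, d):
    """Density of the trapezoidal distribution on [a, d] with plateau [b, c];
    the height h = 2 / (d + c - b - a) makes the total area equal 1."""
    h = 2.0 / (d + c - b - a)
    if x < a or x > d:
        return 0.0
    if x < b:
        return h * (x - a) / (b - a)   # rising edge
    if x <= c:
        return h                        # plateau
    return h * (d - x) / (d - c)        # falling edge

def naive_bayes_score(sample, class_params, prior):
    """Log posterior score of one class: log prior plus summed log
    trapezoidal likelihoods, one (a, b, c, d) tuple per feature."""
    s = math.log(prior)
    for x, (a, b, c, d) in zip(sample, class_params):
        s += math.log(max(trapezoidal_pdf(x, a, b, c, d), 1e-12))  # floor avoids log(0)
    return s
```

Classification picks the class whose score is highest; interval-valued observations map naturally onto the plateau [b, c].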
This paper proposes a new approach for the ELECTRE III method based on the linguistic 2-tuple fusion model for dealing with heterogeneous information. It provides a flexible evaluation framework in which decision-makers can supply their preferences using different information domains, conforming to the nature and uncertainty of the criteria and to their level of knowledge and experience. The new method uses a linguistic-based distance measure appropriate for multicriteria ranking problems. The feasibility and applicability of linguistic ELECTRE III are illustrated with an example of green supplier selection.
https://doi.org/10.1142/9789811269264_0073
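The 2-tuple model underlying the fusion step keeps aggregation lossless by carrying a symbolic translation alpha alongside the linguistic label index. A minimal sketch of the standard translation maps and a mean aggregation, on which machinery such as linguistic ELECTRE III builds:

```python
def to_2tuple(beta):
    """Delta: map a value beta in [0, g] to (label index, symbolic
    translation alpha in [-0.5, 0.5))."""
    i = int(round(beta))
    return i, round(beta - i, 6)

def from_2tuple(i, alpha):
    """Inverse Delta: recover a number for comparison or aggregation."""
    return i + alpha

def aggregate(tuples):
    """Lossless arithmetic-mean aggregation of linguistic 2-tuples
    (one of several aggregation operators the model supports)."""
    beta = sum(from_2tuple(i, a) for i, a in tuples) / len(tuples)
    return to_2tuple(beta)
```

Because alpha preserves the fractional part, no information is lost when translating back and forth, which is what makes the model suitable for heterogeneous inputs.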
Medical named entity recognition (NER) is a pivotal prerequisite for medical knowledge graph (MKG) construction, but existing methods struggle to account for both contextual information and terms of different granularities in Chinese medical texts (CMT). To this end, a NER model for CMT, CMG-CRF, is proposed, which extracts features of different granularities through StackCNN and recognizes entities by combining them with the contextual features obtained by BiGRU. The experimental results show that the proposed model achieves an average F1-score of 92.54% on the Chinese medical dataset, outperforming the benchmark models.
https://doi.org/10.1142/9789811269264_0074
Text clustering algorithms based on TF-IDF ignore the correlation between the extracted words, resulting in unclear geometric meaning of the text vectorization and poor interpretability of the clustering. In this paper, an improved clustering algorithm is proposed. Based on the fuzzy co-occurrence relationship between words, the algorithm obtains co-occurrence keywords through the neighborhood systems of words, thereby strengthening the relevance between words. Text vectorization is then performed using word frequencies, which reflect the importance of a word in the text. Each word serves as a feature axis in the feature space, and its frequency gives the size of the corresponding vector component, so that each text vector has a definite size and direction. Finally, a weighted combination of cosine distance and Euclidean distance is used to construct the clustering objective function. Compared with similar references, experimental results show that the algorithm significantly improves the accuracy and recall of clustering.
https://doi.org/10.1142/9789811269264_0075
Preprocessing techniques play a major role in efficient propositional solving, and clause elimination methods are a significant part of them: they speed up SAT solvers by deleting redundant clauses from CNF formulas without affecting the satisfiability or unsatisfiability of the original formulas. In this paper, a novel theoretical principle of clause elimination, multi-literal implication modulo resolution (MIMR), is put forward. A soundness proof is also given, showing that any clause satisfying the MIMR principle is redundant. Moreover, the effectiveness of MIMR is compared with that of implication modulo resolution (IMR) and shown to be higher.
https://doi.org/10.1142/9789811269264_0076
Laplacian Eigenmaps (LE) is a widely used dimensionality reduction and data reconstruction method. When the data has multiple connected components, the LE method has two obvious deficiencies. First, it might reconstruct each component as a single point, resulting in a loss of information within the component. Second, it focuses only on local features and ignores the relative location of the components, which might cause the reconstructed components to overlap or to completely change their relative positions. To solve these two problems, this paper modifies the optimization objective of the LE method, describes the relative positions between different components using the similarity between high-density core points, and solves the optimization problem with gradient descent to avoid over-compression of data points within the same connected component. A series of experiments on synthetic and real-world data verify the effectiveness of the proposed method.
https://doi.org/10.1142/9789811269264_0077
Yager-preference-involved decision making and aggregation methods have proved quite flexible and are widely applied in numerous areas. This work discusses preference-involved evaluation in several detailed scenarios of large-scale group decision making. In the proposed evaluation framework, we consider separately the evaluation information provided by consultants and the preference information offered by respondents. Both real-valued and probabilistic information for the framework are analyzed.
https://doi.org/10.1142/9789811269264_0078
Ancient paintings are a precious cultural heritage. Unfortunately, they may fade, darken and crack due to natural or human factors, so their protection is urgent. In this paper, virtual fitting and reverse engineering technologies were used to restore three-dimensional virtual clothing from paintings of ladies of the Ming Dynasty: 2D paintings were converted into 3D models that creatively display the costumes in the paintings. Based on the structural characteristics of the clothing, we used reverse engineering to obtain the two-dimensional patterns of the tops and flat pattern-making to obtain those of the bottoms, completing the structural restoration of the garments. Based on research on color, pattern, and the physical properties of fabrics, we also completed the fabric restoration of the clothing. On this basis, an Analytic Hierarchy Process-fuzzy comprehensive evaluation model was used to analyze and evaluate the modeling effect. The research shows that it is feasible to develop garment patterns by combining reverse engineering and flat pattern-making. This method provides a new idea for the protection and development of such paintings and can be used for the three-dimensional restoration and display of ancient clothing paintings.
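The evaluation step combines AHP weighting with fuzzy comprehensive evaluation; a minimal sketch with illustrative judgment and membership matrices (the paper's actual criteria and values are not reproduced here):

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP criterion weights from the principal eigenvector of the
    pairwise comparison (judgment) matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation B = W . R, normalized; each row of R
    holds one criterion's membership degrees over the comment grades."""
    b = np.asarray(weights) @ np.asarray(membership)
    return b / b.sum()
```

The grade with the largest component of B is then taken as the overall judgment of the modeling effect. In a full AHP application the judgment matrix would also be checked for consistency (consistency ratio) before its weights are used.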
https://doi.org/10.1142/9789811269264_bmatter
Qinglin Sun received the BS and MS degrees from Tianjin University, Tianjin, China, in 1985 and 1990, respectively, both in control theory and control engineering, and the PhD degree in control science and engineering from Nankai University, Tianjin, China, in 2003. He is currently a Professor with the College of Artificial Intelligence, Nankai University, Tianjin, China. His research interests include adaptive control, modeling and control of flexible spacecraft, and embedded control systems.