Utilizing dynamical systems to identify community structure has become an important research approach. In this paper, inspired by the relationship between network topology and the dynamic Potts model, we present a novel method in which the condition for forming a simple community is transformed into an objective function F analogous to the Hamiltonian of the Potts model. To obtain a good partition, we develop an improved EM algorithm that searches for the optimal value of the objective function F by successively updating the membership vectors of the nodes, which are jointly influenced by a weighting function W and a tightness expression T. By properly adjusting the relevant parameters, our method can effectively detect community structures. Furthermore, stability, as a new quality measure, is applied to refine the partitions found by the improved EM algorithm and to mitigate the resolution limit associated with modularity. Simulation experiments on benchmark and real-world networks all give excellent results.
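As a loose illustration (not the paper's actual F, whose exact form is not given in the abstract), a Potts-style objective can reward intra-community edges and penalize missing intra-community edges; a hypothetical Python sketch:

```python
import itertools

def potts_objective(edges, membership, gamma=1.0):
    """Hypothetical Potts-style objective F: each intra-community edge
    lowers F by 1; each missing intra-community edge raises it by gamma.
    Lower F means a tighter partition."""
    edge_set = {frozenset(e) for e in edges}
    nodes = sorted(membership)
    f = 0.0
    for u, v in itertools.combinations(nodes, 2):
        if membership[u] == membership[v]:
            f += gamma if frozenset((u, v)) not in edge_set else -1.0
    return f

# Two triangles joined by a single bridge edge: the natural split scores lower.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
bad = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
print(potts_objective(edges, good) < potts_objective(edges, bad))  # True
```

An EM-style search would then repeatedly update each node's membership to whichever community lowers F, which is the role the improved EM algorithm plays in the paper.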
Human–computer interaction systems have been developed in large numbers and quickly applied to sports. Badminton is well suited to robotics because it demands quick recognition and fast movement. For a badminton recognition and tracking system, it is important to accurately identify the shuttlecock, the court, and the opponent. In this paper, we designed and developed a badminton recognition and tracking system using two 2-megapixel high-speed cameras. The system captures images at 250 fps, while the shuttlecock can reach a maximum speed of 300 km/h. Images from the high-speed cameras are acquired over the Camera Link interface and all captured images are processed in real time using different region-of-interest settings. To improve accuracy, we propose a new method for judging the center point of the shuttlecock. We also propose a detector that finds the four corner points of the court from its contour information when the approximate position of the court is known: a sensing area is set around the approximate court position, a histogram within the sensing area selects the points closest to the contour, and the intersections of the resulting lines are taken as the corner points of the court. The proposed corner detector achieves a high detection rate and is more than 10 times more accurate than conventional detectors. The moving shuttlecock is detected by an ellipse detector: from four candidate ellipse contours, our method selects the one that yields the correct ellipse center. Compared with conventional circle detectors on three-dimensional coordinates, the proposed ellipse detector reduces the error by about 3 mm.
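The candidate-selection step can be sketched under assumptions: the paper's ellipse detector is not specified in the abstract, so the sketch below uses a simpler algebraic circle fit as a stand-in, picking the candidate contour whose points are best explained by the fitted shape.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns center, radius,
    and mean absolute radial residual."""
    pts = np.asarray(points, dtype=float)
    # Circle equation rearranged to a linear system in (cx, cy, c).
    A = np.c_[2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    resid = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r).mean()
    return (cx, cy), r, resid

def best_center(candidate_contours):
    """Among candidate contours, return the center of the one whose points
    are best explained by the fitted circle (lowest residual)."""
    center, _, _ = min((fit_circle(c) for c in candidate_contours),
                       key=lambda fit: fit[2])
    return center
```

A production version would fit a general conic (ellipse) instead of a circle, but the selection-by-residual logic is the same.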
This paper proposes an integrated system, neutrosophic C-means-based attribute weighting combined with a kernel extreme learning machine (NCMAW-KELM), for medical data classification using NCM clustering and KELM. To this end, NCMAW is developed and then combined with a classification method for medical data. The proposed approach consists of two steps. In the first step, the input attributes are weighted using the NCMAW method. The purpose of the weighting is twofold: (i) to improve classification performance on medical data, and (ii) to transform a nonlinearly separable dataset into a linearly separable one. In the second step, the KELM algorithm is used for classification. Four kernel types are considered: polynomial, sigmoid, radial basis function, and linear. Simulation results on our three datasets demonstrate that the sigmoid kernel outperforms ELM in most cases. These results suggest that the NCMAW-KELM approach is a promising method for medical data classification.
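The KELM half of the system is standard enough to sketch. Below is a minimal RBF-kernel ELM (the NCMAW attribute weighting itself is not specified in the abstract, so it is omitted; class names and hyperparameters are our assumptions):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class KELM:
    """Minimal kernel extreme learning machine:
    beta = (K + I/C)^-1 T, with one-vs-rest +/-1 targets T."""
    def __init__(self, C=1.0, sigma=1.0):
        self.C, self.sigma = C, sigma

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.classes = sorted(set(y))
        T = np.array([[1.0 if c == yi else -1.0 for c in self.classes]
                      for yi in y])
        K = rbf_kernel(self.X, self.X, self.sigma)
        self.beta = np.linalg.solve(K + np.eye(len(self.X)) / self.C, T)
        return self

    def predict(self, X):
        K = rbf_kernel(np.asarray(X, float), self.X, self.sigma)
        return [self.classes[i] for i in (K @ self.beta).argmax(axis=1)]
```

Swapping the RBF kernel for a sigmoid, polynomial, or linear kernel only changes the `rbf_kernel` function, which is how the four kernel variants in the paper would differ.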
Pair-wise testing is a widely used software testing strategy that requires every pair of parameter values to be tested at least once. This paper focuses on weighting of parameter values for this strategy. Weighting is an added feature that allows the tester to prioritize parameter values by specifying their desired frequency of occurrence in a test suite, giving the tester more control over the resulting suite. However, there has been little research on weighting: to our knowledge, all existing weighting methods treat weights as a second-class requirement and cannot generate a test suite that sufficiently respects the given weights. To overcome this problem, this paper proposes a weighting method that can be used in combination with any one-test-at-a-time greedy test case generation algorithm. By comparing the parameter value distribution in the current test suite with the ideal distribution specified by the given weights, the method generates each test case so that the resulting suite reflects the weights as accurately as possible. The usefulness of the method is demonstrated through empirical results.
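The idea of steering a one-test-at-a-time greedy generator by comparing actual against weight-implied ideal frequencies could be sketched as follows (a simplified illustration of the weighting step only, without the pair-coverage objective):

```python
def weighted_suite(params, weights, n_tests):
    """Greedy one-test-at-a-time generation: each new test picks, for every
    parameter, the value whose observed count lags furthest behind the
    count its weight would imply at this point in the suite."""
    counts = {p: {v: 0 for v in vals} for p, vals in params.items()}
    suite = []
    for t in range(1, n_tests + 1):
        test = {}
        for p, vals in params.items():
            total_w = sum(weights[p].values())
            # deficiency = ideal occurrences after t tests - actual occurrences
            def deficiency(v):
                return t * weights[p][v] / total_w - counts[p][v]
            test[p] = max(vals, key=deficiency)
            counts[p][test[p]] += 1
        suite.append(test)
    return suite
```

For example, a 3:1 weight on `linux` vs `mac` for an `os` parameter yields a 4-test suite with three `linux` occurrences and one `mac`, matching the given weights exactly.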
Fuzzy C-means (FCM) is an important and popular clustering algorithm used in various application domains such as pattern recognition, machine learning, and data mining. Although this algorithm shows acceptable performance on diverse problems, the current literature lacks studies on improving the clustering quality of partitions with overlapping classes. The better the clustering quality of a partition, the better the interpretation of the data, which is essential for understanding real problems. This work proposes two robust FCM algorithms to prevent ambiguous membership in clusters. For this, we compute two types of weights: one weight to mitigate the problem of overlapping clusters, and another to enable the algorithm to identify clusters of different shapes. We perform a study with synthetic datasets, each containing classes of different shapes and different degrees of overlap, and the study also considered real application datasets. Our results indicate that such weights effectively reduce the ambiguity of membership assignments, thus producing a better interpretation of the data.
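For reference, the plain FCM baseline that the two proposed variants build on can be sketched as below; the paper's two weighting schemes are not specified in the abstract, but they would enter through the distance term `d` in the membership update.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means. Returns the membership matrix U (rows sum to 1)
    and the cluster centers. m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # Centers: membership-weighted means of the data points.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Memberships: inversely proportional to distance^(2/(m-1)).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

Ambiguous membership shows up as rows of U close to uniform; the proposed weights aim to push such rows toward a dominant cluster.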
Data fusion is the process of integrating several datasets that share some common variables while other variables are available only in some of the datasets. The main problem of data fusion can be described as follows. From one source we have the datasets X0 and Y0 (N0 observations of n x-variables and m y-variables, respectively), and from another source we have the data X1 (N1 observations of the same n x-variables); we need to estimate the missing portion Y1 (N1 observations of the m y-variables) in order to combine all the data into one set. Several algorithms are considered in this work. One estimates weights proportional to the distances from each ith observation in the X1 "recipients" dataset to all observations in the X0 "donors" dataset. Alternatively, a sample-balancing technique with maximum effective base can be applied, using ridge regression on the Gifi system of binaries obtained from the x-variables for the best fit of the "donors" X0 data to the margins defined by each respondent in the "recipients" X1 dataset. Weighted regressions of each y in Y0 on all variables in X0 are then constructed, and for each ith observation in X1 these regressions are used to predict the y-variables of the Y1 "recipients" dataset. If X and Y are the same n variables from different sources, the dual partial least squares technique and a special regression model with dummies defining each of the three available sets are used for prediction of the Y1 data.
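The first, distance-weight idea can be illustrated in a simplified form, with inverse-distance weights standing in for the weighted regressions described above (function name and details are our assumptions):

```python
import numpy as np

def fuse(X0, Y0, X1, eps=1e-9):
    """Distance-weighted donor fusion: each recipient's y-values are the
    average of donor y-values, weighted inversely by distance in x-space.
    X0: donors (N0 x n), Y0: donor outcomes (N0 x m), X1: recipients (N1 x n).
    Returns the estimated Y1 (N1 x m)."""
    X0, Y0, X1 = (np.asarray(a, float) for a in (X0, Y0, X1))
    d = np.linalg.norm(X1[:, None, :] - X0[None, :, :], axis=2)
    W = 1.0 / (d + eps)          # eps guards against zero distance
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y0
```

A recipient halfway between two donors receives the mean of their y-values, and a recipient coinciding with a donor essentially inherits that donor's y-values.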
In developing a firm’s environmental, social, and governance (ESG) performance indicator, we note that some criteria belong to more than one dimension. For example, “green energy” can be classified as environmental or social. We call such criteria “overlapping criteria.” The main issue is assigning appropriate weighting scores to overlapping and non-overlapping criteria. Another concern is that if there is a negative correlation between criteria, it is inappropriate to construct a composite indicator (CI). To resolve these shortcomings, we propose a two-level approach. The first level uses data envelopment analysis (DEA) to find three dimensional CIs, where each pillar of the DEA includes the corresponding overlapping and non-overlapping criteria. The second level finds the aggregate score of the dimensional CIs by a simple weighted average or by DEA, depending on the negative correlation between dimensions. Due to the nature of DEA, some firms can rely on only a few criteria or dimensions to achieve the best ESG performance, which deviates from the purpose of ESG development. To resolve this issue, we propose to impose a set of sustainability constraints so that firms must consider all the criteria in a balanced manner while maintaining flexibility to obtain a set of optimal DEA weighting scores.
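The two-level structure (minus the DEA optimization, which we omit here and replace with simple averages) can be sketched: overlapping criteria contribute to every pillar they belong to, and pillar scores are then aggregated into one composite.

```python
def two_level_ci(scores, pillar_map):
    """Two-level composite indicator: overlapping criteria count toward every
    pillar (dimension) they belong to; pillar CIs are then averaged into one
    aggregate CI. Simple averages stand in for the DEA weighting."""
    pillars = {}
    for criterion, dims in pillar_map.items():
        for dim in dims:
            pillars.setdefault(dim, []).append(scores[criterion])
    pillar_ci = {dim: sum(vals) / len(vals) for dim, vals in pillars.items()}
    return sum(pillar_ci.values()) / len(pillar_ci), pillar_ci

# Hypothetical example: "green_energy" overlaps the E and S pillars.
scores = {'green_energy': 0.8, 'emissions': 0.6, 'labor': 0.4}
pillar_map = {'green_energy': ['E', 'S'], 'emissions': ['E'], 'labor': ['S']}
total, per_pillar = two_level_ci(scores, pillar_map)
```

In the paper's DEA version, the per-pillar and aggregate averages would instead be optimized weightings, with sustainability constraints bounding each weight away from zero so no criterion can be ignored.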