
  Bestsellers

  • Article (No Access)

    A Data-Driven Model to Construct the Influential Factors of Online Product Satisfaction

    Online shopping is becoming more prevalent, with consumers turning to e-commerce platforms to search for information about the goods and services they need. Users usually check other consumers' reviews on the platform as a reference while shopping. Online retailers can collect and analyze these online reviews to monitor consumer opinions about product quality, logistics services, packaging and other attributes, providing an accurate basis for product improvement and service optimization. This paper applies the Latent Dirichlet Allocation (LDA) algorithm to extract the critical factors that affect consumer satisfaction. More than 30,000 reviews of seven kinds of 3C (computer, communication, and consumer electronics) product categories, obtained by crawler technology, are analyzed. Then, the DEMATEL-ANP (DANP) method is applied to the extracted framework to build a cause-and-effect diagram of the 3C product satisfaction model. The innovative LDA-DANP hybrid model clarifies the causal influence of the evaluation dimensions for 3C products sold online. The results show that brand value is the most important dimension affecting consumer online product satisfaction. Appearance design, logistics awareness service and product performance also have a positive influence on perceived service and brand value. Finally, some management implications and practical suggestions are proposed.
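    A minimal sketch of the review topic-extraction stage described above, assuming a gensim-style LDA workflow; the tokenized reviews, topic count, and hyperparameters are illustrative placeholders rather than the paper's actual configuration.

    ```python
    from gensim import corpora
    from gensim.models import LdaModel

    # Toy stand-ins for the ~30,000 crawled 3C-product reviews.
    reviews = [
        ["battery", "lasts", "long", "fast", "delivery"],
        ["screen", "scratched", "poor", "packaging"],
        ["great", "brand", "good", "performance", "fast", "delivery"],
    ]

    dictionary = corpora.Dictionary(reviews)               # word <-> id mapping
    corpus = [dictionary.doc2bow(doc) for doc in reviews]  # bag-of-words vectors

    # num_topics would correspond to the candidate satisfaction dimensions
    # (logistics, packaging, performance, ...) that feed the DANP stage.
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3,
                   passes=10, random_state=0)

    for topic_id, words in lda.print_topics(num_words=4):
        print(topic_id, words)
    ```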

  • Article (No Access)

    A WORD POSITION-RELATED LDA MODEL

    LDA (Latent Dirichlet Allocation), proposed by Blei, is a generative probabilistic model of a corpus in which documents are represented as random mixtures over latent topics and each topic is characterized by a distribution over words; it does not, however, model the word positions within each document. In this paper, a Word Position-Related LDA Model is proposed that takes the word positions of every document in the corpus into account, so that each word is also characterized by a distribution over word positions. At the same time, the precision of topic-word interpretability is improved by integrating the word-position distribution with an appropriate word degree, taking into account the different word degrees at different word positions. Finally, a new method, the size-aware word intrusion method, is proposed to improve topic-word interpretability. Experimental results on the NIPS corpus show that the Word Position-Related LDA Model improves the precision of topic-word interpretability, with an average improvement of about 9.67%. Comparison across the experimental data also shows that the size-aware word intrusion method interprets the topic-word semantic information more comprehensively and more effectively.
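    The paper's position-related model is not part of standard topic-modeling libraries, so the sketch below only illustrates the underlying intuition, namely that a word's contribution can depend on where it occurs, by exponentially down-weighting later positions in a plain bag-of-words. The decay schedule and example document are invented for illustration and are not the paper's formulation.

    ```python
    import math
    from collections import defaultdict

    def position_weighted_bow(tokens, decay=0.05):
        """Weight each occurrence by exp(-decay * position): early words count more."""
        weights = defaultdict(float)
        for pos, word in enumerate(tokens):
            weights[word] += math.exp(-decay * pos)
        return dict(weights)

    doc = ["neural", "topic", "models", "and", "topic", "coherence"]
    print(position_weighted_bow(doc))  # "topic" at positions 1 and 4 sums to ~1.77
    ```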

  • Article (No Access)

    Multi-Channel Mapping Image Segmentation Method Based on LDA

    In order to improve the segmentation accuracy of plant lesion images, a multi-channel segmentation algorithm for plant disease images was proposed based on linear discriminant analysis (LDA) mapping and K-means clustering. Firstly, six color channels from the RGB and HSV models were obtained, and the six channel values of all pixels were laid out in six columns. One of these channels was treated as the label and the others as sample features. These data were grouped for linear discriminant analysis, and the mapping values of the other five channels were projected into the eigenvector space corresponding to the three largest eigenvalues. Secondly, the mapped values were used as the input to K-means, and the points with the minimum and maximum pixel values were used as the initial cluster centers, which overcame the randomness of selecting initial cluster centers in K-means. The segmented pixels were divided into background and foreground, so that the proposed segmentation method became a two-class clustering into background and foreground. Finally, the experimental results showed that the segmentation effect of the proposed LDA mapping-based method is better than those of the K-means, ExR and CIVE methods.
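    A rough sketch of this two-stage pipeline, assuming OpenCV and scikit-learn; the input image, the choice of label channel, and the coarse quantization into discrete classes are illustrative assumptions, not the paper's exact setup.

    ```python
    import numpy as np
    import cv2  # OpenCV, for the RGB/HSV channel split
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans

    img = cv2.imread("lesion.png")                  # hypothetical disease image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    pixels = np.dstack([img, hsv]).reshape(-1, 6).astype(float)  # 6 values/pixel

    # One channel serves as a (coarsely quantized) label, the other five as features.
    labels = (pixels[:, 0] // 32).astype(int)
    mapped = LinearDiscriminantAnalysis(n_components=3).fit_transform(
        pixels[:, 1:], labels)

    # Seed K-means at the minimum and maximum mapped points instead of at random,
    # mirroring the paper's fix for initial-center randomness (2 classes: fg/bg).
    s = mapped.sum(axis=1)
    init = mapped[[s.argmin(), s.argmax()]]
    mask = KMeans(n_clusters=2, init=init, n_init=1).fit_predict(mapped)
    segmentation = mask.reshape(img.shape[:2])      # binary foreground/background
    ```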

  • Article (No Access)

    A Hybrid Fuzzy System via Topic Model for Recommending Highlight Topics of CQA in Developer Communities

    Question-answering (QA) websites are a rapidly growing source of useful information in numerous areas. While these platforms present novel opportunities for online users to provide solutions, they also pose numerous challenges as the QA community keeps growing. QA sites provide platforms for users to cooperate by asking questions or giving answers. Stack Overflow is a massive source of information for both industry and academic practitioners, and its analysis can yield useful insights. Topic modeling of Stack Overflow is very useful for pattern discovery and behavior analysis in programming knowledge. In this paper, we propose a framework based on the Latent Dirichlet Allocation (LDA) algorithm and fuzzy rules for mining question topics and recommending highlight latent topics in a community question-answering (CQA) forum of a developer community. We consider a real dataset of 170,091 programmer questions from the R-language forum on the Stack Overflow website. Our results show that LDA topic models combined with novel fuzzy rules can play an effective role in extracting meaningful concepts and mining semantics in question-answering forums in developer communities.
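    A minimal sketch of the question-topic-mining stage, assuming scikit-learn; the toy question titles stand in for the 170,091 R-language posts, and the fuzzy-rule layer the paper adds on top is only indicated in a comment.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    questions = [
        "how to merge two data frames in R",
        "ggplot2 legend not showing",
        "apply a function over rows of a data frame",
    ]
    X = CountVectorizer(stop_words="english").fit_transform(questions)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    # lda.components_[k] holds the word weights of topic k; the paper's fuzzy
    # rules would then score and rank these topics for recommendation.
    ```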

  • Article (No Access)

    Features-Level Fusion of Reflectance and Illumination Images in Finger-Knuckle-Print Identification System

    In Finger-Knuckle-Print (FKP) recognition, feature extraction plays a very important role in overall system performance. This paper merges two types of histograms of oriented gradients (HOG)-based features, extracted from reflectance and illumination images, for FKP-based identification. The Adaptive Single Scale Retinex (ASSR) algorithm is used to extract the illumination and reflectance images from each FKP image. Serial feature fusion is used to form a large feature vector for each user and to extract the distinctive features in the higher-dimensional vector space. Finally, the cosine similarity distance measure is used for classification. The Hong Kong Polytechnic University (PolyU) FKP database is used in all of the tests. Experimental results show that our proposed system achieves better results than other state-of-the-art systems.
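    A sketch of the fusion and matching stages, assuming scikit-image HOG features; the ASSR decomposition is approximated here by a Gaussian low-pass split, which is only a crude stand-in for the actual algorithm, and random arrays replace real FKP images.

    ```python
    import numpy as np
    from skimage.feature import hog
    from scipy.ndimage import gaussian_filter
    from scipy.spatial.distance import cosine

    def fused_features(fkp, eps=1e-6):
        illumination = gaussian_filter(fkp, sigma=5)          # crude ASSR stand-in
        reflectance = np.log(fkp + eps) - np.log(illumination + eps)
        h_refl = hog(reflectance, pixels_per_cell=(16, 16))
        h_illu = hog(illumination, pixels_per_cell=(16, 16))
        return np.concatenate([h_refl, h_illu])               # serial feature fusion

    probe = np.random.rand(64, 128)                           # dummy FKP images
    gallery = np.random.rand(64, 128)
    score = 1 - cosine(fused_features(probe), fused_features(gallery))  # similarity
    ```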

  • Article (No Access)

    Intelligent Analysis and Positioning of Political Public Opinion in Universities

    With the rapid development of Internet technology, the network has become an indispensable part of undergraduates' lives, and the correct guidance of public opinion has become an important element of the ideological work of universities. Undergraduates are at an important stage in the formation and development of their thinking, so they are easily incited by cyber-rumors. It is therefore particularly important to collect data on political public opinion in universities and to position hot topics, enabling early detection of political public opinion tendencies and helping to prevent the outbreak of major security incidents. With this in mind, this paper obtains multi-source political public opinion data from the BBS, Tieba and Weibo of Sun Yat-sen University (SYSU) through a crawler. We study a text feature extraction method based on Word2Vec and LDA (Latent Dirichlet Allocation), which mitigates the high-dimensional sparsity of the traditional Vector Space Model (VSM) text representation. Meanwhile, building on the classical Single-pass clustering algorithm, this paper studies a Single-pass & HAC clustering algorithm. In addition, a measure of topic heat is defined to calculate the heat value of political public opinion, and a dictionary- and rule-based method is used to improve the accuracy of sentiment-tendency analysis. The experimental results demonstrate that topic detection and positioning based on LDA & Word2Vec and the Single-pass & HAC algorithm outperform other methods.
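    A minimal sketch of the classical Single-pass step over document vectors (for example, the Word2Vec & LDA features described above); the similarity threshold is an illustrative assumption, and the paper's HAC stage would then merge these first-pass clusters.

    ```python
    import numpy as np

    def single_pass(vectors, threshold=0.8):
        """Assign each document to the first sufficiently similar cluster,
        else open a new one; a single order-dependent pass over the data."""
        centroids, members = [], []
        for v in vectors:
            v = v / np.linalg.norm(v)
            sims = [float(v @ c) for c in centroids]
            if sims and max(sims) >= threshold:
                k = int(np.argmax(sims))
                members[k].append(v)
                centroid = np.mean(members[k], axis=0)
                centroids[k] = centroid / np.linalg.norm(centroid)
            else:
                centroids.append(v)
                members.append([v])
        return centroids, members
    ```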

  • Article (No Access)

    THE PROBLEM OF THE BAND GAP IN LDA CALCULATIONS

    In calculating band structure, the local density approximation and density functional theory are widely popular and do reproduce much of the basic physics. Regrettably, without some fine tuning, the local density approximation and density functional theory do not generally get the details of the experimental band structure correct; in particular, the band gap in semiconductors and insulators is generally found to be too small compared with experiment. For experimentalists using commercial packages to calculate the electronic structure of materials, some caution is indicated, as long-standing problems exist with the local density approximation and density functional theory.

  • Article (No Access)

    STRUCTURAL PHASE TRANSITION, ELASTIC AND ELECTRONIC PROPERTIES OF CuXSe2 (X = In, Ga) CHALCOPYRITE

    The structural, elastic and electronic properties of the chalcopyrite compounds CuInSe2 and CuGaSe2 have been investigated using the full-potential linearized muffin-tin orbital (FP-LMTO) method within the framework of density functional theory (DFT). In this approach, the local density approximation is used for the exchange-correlation potential with the Perdew–Wang parametrization. The equilibrium lattice parameters, bulk modulus, transition pressure, elastic constants and related parameters such as Poisson's ratio, Young's modulus, shear modulus and Debye temperature were calculated and compared with the available experimental and theoretical data, with which they are in reasonable agreement. In this paper the electronic properties are treated with the GGA + U approach, which brings out the important role played by the d-states of the noble metal (Cu) and gives the correct nature of the energy band gap. Our results show that both compounds exhibit semiconductor behaviour with a direct band gap.

  • Article (Open Access)

    A COMPARATIVE RESEARCH ON G-HMM AND TSS TECHNOLOGIES FOR EYE MOVEMENT TRACKING ANALYSIS

    Eye movement analysis provides a new way for disease screening, quantification and assessment. In order to track and analyze eye-movement scanpaths under different conditions, this paper proposes a Gaussian mixture-Hidden Markov Model (G-HMM) of the eye-movement scanpath during saccades, combined with the Time-Shifting Segmentation (TSS) method for model optimization; the Linear Discriminant Analysis (LDA) method is then used to perform the recognition and evaluation tasks on the multi-dimensional features. In the experiments, datasets of eye-movement sequences over 800 real-scene images were used. The results show that the G-HMM method has high specificity for free-search tasks and high sensitivity for prompt-object search tasks, while TSS strengthens the differences in eye-movement characteristics, which is conducive to eye-movement pattern recognition, especially for search tasks.
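    A sketch of fitting a G-HMM to one scanpath, assuming the hmmlearn library; the random-walk fixation coordinates, state count, and mixture size are placeholders, not the paper's experimental settings.

    ```python
    import numpy as np
    from hmmlearn.hmm import GMMHMM

    # Hypothetical (x, y) fixation sequence from one eye-movement recording.
    scanpath = np.cumsum(np.random.randn(120, 2), axis=0)

    model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag", random_state=0)
    model.fit(scanpath)                     # hidden states ~ gaze regions/phases
    log_likelihood = model.score(scanpath)  # one candidate feature for the LDA stage
    ```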

  • Article (No Access)

    A PIECEWISE-DEFINED SEVERITY DISTRIBUTION-BASED LOSS DISTRIBUTION APPROACH TO ESTIMATE OPERATIONAL RISK: EVIDENCE FROM CHINESE NATIONAL COMMERCIAL BANKS

    Following the Basel II Accord and the increased focus on operational risk as an aspect distinct from credit and market risk, quantifying operational risk has been a major challenge for banks. This paper analyzes the implications of the advanced measurement approach for estimating operational risk. When modeling the severity of losses in a realistic manner, our preliminary tests indicate that classic distributions cannot fit the entire range of operational risk data samples (collected from public information sources) well. We therefore propose a piecewise-defined severity distribution (PSD) that combines a parametric form for ordinary losses with a generalized Pareto distribution (GPD) for large losses, and estimate operational risk by the loss distribution approach (LDA) with Monte Carlo simulation. We compare the operational risk measured with the piecewise-defined severity distribution based LDA (PSD-LDA) with that obtained from the basic indicator approach (BIA), and the ratios of operational risk regulatory capital of some major international banks with those of Chinese commercial banks. The empirical results reveal the rationality and promise of applying the PSD-LDA to Chinese national commercial banks.
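    A compact Monte Carlo sketch of an LDA with a piecewise severity, using a lognormal body below a threshold u and a GPD tail above it, in the spirit of the PSD-LDA; every parameter value here is an illustrative assumption, not an estimate from the paper's loss data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, lam = 20_000, 25.0        # simulated years, Poisson frequency rate
    u, p_tail = 1.0, 0.05             # severity threshold, share of tail losses

    annual_losses = np.empty(n_sims)
    for i in range(n_sims):
        n = rng.poisson(lam)                                   # number of losses
        in_tail = rng.random(n) < p_tail
        body = stats.lognorm.rvs(s=1.0, scale=0.1, size=n, random_state=rng)
        tail = u + stats.genpareto.rvs(c=0.3, scale=0.5, size=n, random_state=rng)
        annual_losses[i] = np.where(in_tail, tail, np.minimum(body, u)).sum()

    capital = np.quantile(annual_losses, 0.999)  # 99.9% VaR as the capital proxy
    ```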

  • Article (No Access)

    An Entity Extraction and Categorization Technique on Twitter Streams

    As social media platforms have gained huge momentum in recent years, the amount of information generated from social media sites has grown exponentially, which poses a great challenge for information retrieval systems in extracting potential named entities. Researchers have used semantic annotation to retrieve entities from unstructured documents, but this mechanism returns too many ambiguous entities. In this work, the DBpedia knowledge base is adopted for entity extraction and categorization. To achieve the entity extraction task precisely, a two-step process is proposed: (a) train on the unstructured datasets with Word2Vec and classify the entities into their respective categories; (b) crawl web pages, forums, and other web sources to identify entities that are not present in DBpedia. The evaluation shows results with higher precision and a promising F1 score.
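    A minimal sketch of step (a), assuming gensim Word2Vec and a linear classifier; the two toy tweets and DBpedia-style category labels are invented stand-ins for the annotated Twitter streams.

    ```python
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    tweets = [["apple", "launches", "new", "iphone"],
              ["messi", "scores", "twice", "again"]]
    labels = ["Company", "Person"]          # hypothetical DBpedia-style categories

    w2v = Word2Vec(sentences=tweets, vector_size=50, min_count=1, seed=0)
    # Represent each tweet as the mean of its word vectors, then classify.
    X = np.array([w2v.wv[t].mean(axis=0) for t in tweets])
    clf = LogisticRegression().fit(X, labels)
    ```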

  • Article (No Access)

    MONOTONOUS TASKS AND ALCOHOL CONSUMPTION EFFECTS ON THE BRAIN BY EEG ANALYSIS USING NEURAL NETWORKS

    An analysis of electroencephalogram (EEG) signals recorded while performing a monotonous task and drinking alcohol is proposed, using principal component analysis (PCA) and linear discriminant analysis (LDA) for feature extraction and neural networks (NNs) for classification. The EEG is captured while performing a monotonous task that can adversely affect the brain and possibly cause stress. Moreover, we investigate the effects of alcohol on the brain by capturing the data continuously after consumption of equal amounts of alcohol. We hope that our work will shed more light on the relationship between such actions and the EEG, and investigate whether there is any relation between the tasks and mental stress. EEG signals offer a rare look at brain activity, while monotonous activities are well known to cause irritation, which may contribute to mental stress. We apply PCA and LDA to characterize the change in each component, extract it, and discriminate using an NN. The experiments found that PCA and LDA are effective methods for EEG signal analysis.
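    A sketch of the PCA -> LDA -> NN chain, assuming scikit-learn; the random matrix stands in for real EEG epoch features, and the component counts and layer sizes are illustrative.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    X = np.random.randn(200, 64)       # 200 EEG epochs x 64 features (placeholder)
    y = np.random.randint(0, 2, 200)   # 0 = baseline, 1 = task/alcohol condition

    pipe = make_pipeline(
        PCA(n_components=20),                        # decorrelate and compress
        LinearDiscriminantAnalysis(n_components=1),  # maximize class separation
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    )
    pipe.fit(X, y)
    ```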

  • Chapter (No Access)

    Automatic analysis of microblogging data to aid in emergency management

    Microblogging platforms like Twitter have in recent years become one of the important sources of information for a wide spectrum of users. As a result, these platforms have become great resources for supporting emergency management. During any crisis, a huge amount of social media text must be sieved through within a short span of time to extract meaningful information. Extracting emergency-specific information, such as topic keywords, landmarks or the geo-locations of sites, from these texts plays a significant role in building an application for emergency management. This paper therefore highlights different aspects of the automatic analysis of tweets to help develop such an application. It focuses on: (1) identifying crisis-related tweets using machine learning; (2) exploring topic model implementations and their effectiveness on short messages (as short as 140 characters), performing exploratory data analysis on crisis-related short texts collected from Twitter, and examining different visualizations to understand the commonality and differences between topics and different crisis-related data; and (3) providing a proof of concept for identifying and retrieving different geo-locations from tweets and extracting the GPS coordinates from this data to approximately plot them on a map.
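    A minimal baseline sketch of item (1), crisis-tweet identification, assuming a TF-IDF plus linear-SVM pipeline in scikit-learn; the tweets and labels are invented, and the chapter's own classifier may differ.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    tweets = ["flood water rising fast near the bridge",
              "great coffee and sunshine this morning"]
    labels = [1, 0]                       # 1 = crisis-related, 0 = not

    clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(tweets, labels)
    print(clf.predict(["evacuation ordered after the earthquake"]))
    ```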

  • Chapter (No Access)

    EXPERIMENTAL CALIBRATION OF A SIMPLIFIED METHOD TO EVALUATE ABSOLUTE ROUGHNESS OF VEGETATED CHANNELS

    This chapter deals with the calibration of a new simplified experimental method to evaluate the absolute roughness of vegetated channels. The method is based on boundary-layer measurements in a short channel rather than on uniform-flow measurements, as is usual. The proposed method can be applied to any kind of rough bed, but it is particularly useful for vegetated beds, where long channels are difficult to prepare. In this chapter a calibration coefficient is obtained experimentally. In order to perform suitable comparisons with literature data, the relationships between the absolute roughness ε and Manning's n coefficient are examined in depth. The results compare successfully with experimental data from the literature, with a very good fit. Finally, a particular dependence of the ε values on vegetation density is explained through further experiments. In conclusion, the proposed method, once calibrated, can provide reliable predictions of absolute roughness in vegetated channels.
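    As a worked illustration of one classical ε-to-n conversion, the Strickler-type relation n ≈ ε^(1/6)/21.1 (ε in metres) is sketched below; the chapter calibrates its own coefficient, so this is only the textbook baseline, not the chapter's result.

    ```python
    def manning_n_strickler(epsilon_m: float) -> float:
        """Manning's n from absolute roughness via the Strickler relation."""
        return epsilon_m ** (1 / 6) / 21.1

    print(manning_n_strickler(0.05))   # ~0.029 for a 5 cm roughness height
    ```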

  • Chapter (No Access)

    A Method of Modelling Software Evolution Confirmation Based on LDA

    This paper researches a method that can confirm software evolution based on Latent Dirichlet Allocation (LDA). LDA can analyze the interdependency among words, topics and documents, and this interdependency can be expressed as probabilities. In this paper, LDA is adopted to model software evolution: each package in the source code is taken as a document; function (method) names, variable names and comments are regarded as words; and the probabilities relating the three are computed. Comparing the results with the update reports can confirm whether the new version of the software is consistent with those reports.
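    A minimal sketch of the modelling step, assuming gensim; each "document" is one package's identifier and comment words, as the paper proposes, but the packages and tokens here are invented.

    ```python
    from gensim import corpora
    from gensim.models import LdaModel

    packages = {
        "net.io":  ["socket", "read", "write", "buffer", "connection"],
        "ui.view": ["render", "button", "layout", "click", "widget"],
    }
    dictionary = corpora.Dictionary(packages.values())
    corpus = [dictionary.doc2bow(words) for words in packages.values()]
    lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)
    # Comparing per-package topic distributions across versions with the update
    # report is what confirms (or not) the documented evolution.
    ```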