
  • Chapter (No Access)

    Feature Word Vector Based on Short Text Clustering

    A short text clustering algorithm based on feature word vectors is proposed in this paper to address the poor clustering performance on short text caused by sparse features and rapid content updates. First, a formula for feature word extraction based on word part-of-speech (POS) weighting is defined and used to extract feature words that represent each short text. Second, word vectors capturing the semantics of the feature words are obtained by training a Continuous Skip-gram Model on a large-scale corpus. Finally, Word Mover's Distance (WMD) is used to compute the similarity between short texts, which drives a hierarchical clustering algorithm. Evaluation on four test datasets shows that the proposed algorithm significantly outperforms traditional clustering algorithms, with a mean F-measure 55.43% higher on average than that of the second-best method.
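
The pipeline above (embed feature words, then compare short texts with WMD) can be sketched in a minimal form. This is not the chapter's implementation: the word vectors below are toy 2-D stand-ins for Skip-gram embeddings, and the distance shown is the relaxed nearest-neighbour lower bound of WMD rather than the full optimal-transport solution.

```python
import math

# Toy 2-D vectors standing in for Skip-gram embeddings trained on a
# large corpus (assumption: real vectors would have hundreds of dims).
VECTORS = {
    "cat":    [1.0, 0.0],
    "kitten": [0.9, 0.1],
    "dog":    [0.0, 1.0],
    "puppy":  [0.1, 0.9],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def relaxed_wmd(doc_a, doc_b):
    """Relaxed Word Mover's Distance (a lower bound on full WMD):
    each feature word in doc_a sends all of its uniform bag-of-words
    mass to its single closest word in doc_b."""
    if not doc_a or not doc_b:
        return float("inf")
    weight = 1.0 / len(doc_a)  # uniform word weights
    return sum(
        weight * min(euclidean(VECTORS[w], VECTORS[v]) for v in doc_b)
        for w in doc_a
    )

# Semantically close short texts get a smaller distance.
near = relaxed_wmd(["cat", "kitten"], ["kitten", "cat"])
far = relaxed_wmd(["cat", "kitten"], ["dog", "puppy"])
```

The resulting pairwise distances would then feed a standard agglomerative (hierarchical) clustering step, merging the closest short-text clusters first.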

  • Chapter (No Access)

    A Short Text Similarity Measure Based on Hidden Topics

    Similarity measurement plays an important role in the classification of short text. However, traditional text similarity measures fail to achieve high accuracy because of the sparse features of short text. In this paper, we propose a new method based on varying numbers of hidden topics, derived through well-known topic models such as Latent Dirichlet Allocation (LDA). We obtain the related topics and integrate them with the features of the short text to reduce sparseness and improve word co-occurrence. Extensive experiments on an open dataset (Wikipedia) show that the proposed method improves classification accuracy by 14.03% with the k-nearest neighbors (KNN) classifier. This indicates that our method outperforms state-of-the-art methods that do not utilize hidden topics, and validates that the method is effective.
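
The core idea of the abstract, augmenting a short text's sparse features with words from related hidden topics before classification, can be sketched as follows. This is a simplified illustration, not the chapter's method: the topic-to-word map is a hand-made toy standing in for topics inferred by LDA, and the overlap threshold is an assumed heuristic for deciding when a topic is "related".

```python
# Toy topic -> word-set map standing in for topics inferred by LDA
# (assumption: a real model would assign per-word probabilities).
TOPICS = {
    "pets":   {"cat", "dog", "kitten", "puppy"},
    "sports": {"goal", "match", "team", "score"},
}

def expand_features(words, topics, threshold=2):
    """Augment a short text's sparse feature set with the words of any
    topic sharing at least `threshold` words with the text, increasing
    word co-occurrence between related short texts."""
    features = set(words)
    for topic_words in topics.values():
        if len(features & topic_words) >= threshold:
            features |= topic_words  # merge in the related topic's words
    return features

expanded = expand_features({"cat", "dog"}, TOPICS)
```

Two short texts that share no surface words but trigger the same topic would, after expansion, share that topic's words, which is what lets a KNN classifier over the expanded features find nearer neighbours.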