Microblogging platforms like Twitter have, in recent years, become one of the most important sources of information for a wide spectrum of users. As a result, these platforms have become valuable resources for supporting emergency management. During any crisis, a huge volume of social media text must be sifted within a short span of time to extract meaningful information. Extracting emergency-specific information, such as topic keywords, landmarks, or geo-locations of affected sites, from these texts plays a significant role in building an application for emergency management. This paper therefore highlights different aspects of the automatic analysis of tweets to help develop such an application. It focuses on: (1) identification of crisis-related tweets using machine learning; (2) exploration of topic-model implementations and their effectiveness on short messages (as short as 140 characters), including an exploratory data analysis of crisis-related short texts collected from Twitter and visualizations that reveal the commonalities and differences between topics and between different crisis-related datasets; and (3) a proof of concept for identifying and retrieving geo-locations from tweets, extracting GPS coordinates from this data, and plotting them approximately on a map.
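The classification step in (1) can be sketched as a standard supervised text pipeline. The sketch below assumes scikit-learn and uses a tiny invented labeled corpus for illustration; it is not the paper's model or data.

```python
# Minimal sketch of crisis-tweet identification: TF-IDF features plus
# logistic regression. The labeled tweets below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Flood waters rising near the bridge, roads closed",
    "Earthquake felt downtown, buildings evacuated",
    "Wildfire smoke visible from the highway, stay indoors",
    "Rescue teams needed at the collapsed school",
    "Loving this sunny weather at the beach today",
    "Just watched a great movie with friends",
    "New coffee shop opened downtown, highly recommend",
    "Traffic is slow but nothing unusual this morning",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = crisis-related, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["Evacuation ordered after flash flood warning"])[0])
```

In a real system the training set would come from annotated crisis corpora, and the same pipeline object would score incoming tweets in a stream.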
Negative online reviews have become essential decision-making information for businesses. This chapter conducts text mining on negative online reviews of e-commerce platforms to accurately identify problems in online platform transactions, uses social network analysis to clarify the correlations between critical factors in negative reviews, and applies the LDA topic model to mine eight significant themes of negative reviews: platform-rider disputes, education refund difficulties, difficulty in canceling or changing reservations, damage or loss of goods, taxi disputes, payment harassment complaints, platform membership disputes, and slow customer-service response. The chapter is of great significance for improving the quality of products and services, enhancing customer satisfaction, and helping the government regulate e-commerce platforms effectively.
This chapter deals with the calibration of a new simplified experimental method to evaluate the absolute roughness of vegetated channels. The method is based on boundary-layer measurements in a short channel rather than on the usual uniform-flow measurements. The proposed method can be applied to any kind of rough bed, but it is particularly useful for vegetated beds, where long channels are difficult to prepare. In this chapter a calibration coefficient is obtained experimentally. To enable suitable comparisons with literature data, the relationships between the absolute roughness ε and Manning's n coefficient are examined in depth. The results compare very well with experimental data from the literature. Finally, a particular dependence of ε values on vegetation density is explained through further experiments. In conclusion, the proposed method, once calibrated, can provide reliable predictions of absolute roughness in vegetated channels.
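For context, the kind of ε–n relationship the chapter examines is typified by the classical Strickler-type formula; the coefficient shown is the textbook Strickler value, not the calibration coefficient determined experimentally in the chapter.

```latex
% Strickler-type relation between Manning's n and absolute roughness
% \varepsilon (in metres). The coefficient 21.1 is the classical
% Strickler constant, not this chapter's calibrated value.
n \;\approx\; \frac{\varepsilon^{1/6}}{21.1}
```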
Classification techniques are routinely applied to satellite images. Pansharpening can provide super-resolved multispectral images that improve the performance of classification methods, but so far pansharpening has been explored only as a preprocessing step. In this work we address the problem of adaptively modifying the pansharpening method to improve the precision and recall figures of merit for the classification of a given class without significantly degrading the classifier's performance on the other classes. The validity of the proposed technique is demonstrated on a real QuickBird image.
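The per-class precision and recall figures of merit that the adaptive scheme targets can be computed as sketched below; the labels are invented for illustration and have nothing to do with the QuickBird experiment.

```python
# Per-class precision/recall: the quantities the adaptive pansharpening
# monitors for one target class while checking the others for
# degradation. Labels are invented placeholders.
from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 0, 2, 2, 1, 2, 0]

# average=None returns one score per class, so the target class
# (say class 1) can be tracked individually.
prec = precision_score(y_true, y_pred, average=None)
rec = recall_score(y_true, y_pred, average=None)
print(prec[1], rec[1])
```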
This paper presents a method for confirming software evolution based on Latent Dirichlet Allocation (LDA). LDA analyzes the interdependencies among words, topics, and documents, and expresses those interdependencies as probabilities. In this paper, LDA is adopted to model software evolution: each package in the source code is treated as a document; function (method) names, variable names, and comments are treated as words; and the probabilities relating the three are computed. By comparing the results with update reports, one can confirm whether a new software version is consistent with its update reports.
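The setup described above can be sketched as follows: packages become documents whose "words" are identifier and comment tokens, and a shift in a package's topic distribution between versions flags a change that the update report should describe. This is a hedged sketch using scikit-learn's LDA with invented identifiers, not the paper's implementation.

```python
# Sketch: packages as LDA documents, identifiers/comments as words.
# All package contents below are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

# Two versions of the same two packages, flattened to token strings.
packages_v1 = [
    "parse_config load_settings config path read settings file",
    "render_page draw_widget layout screen widget paint",
]
packages_v2 = [
    "parse_config load_settings validate_schema config schema check",
    "render_page draw_widget layout screen widget paint",
]

vec = CountVectorizer()
X = vec.fit_transform(packages_v1 + packages_v2)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # per-package topic distributions

# L1 distance between a package's topic mix in v1 and v2: a large
# shift hints at a change the update report should mention.
shift = np.abs(theta[0] - theta[2]).sum()
print(round(shift, 3))
```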