The structural, elastic and electronic properties of the chalcopyrite compounds CuInSe2 and CuGaSe2 have been investigated using the full-potential linearized muffin-tin orbital method (FP-LMTO) within the framework of density functional theory (DFT). In this approach, the local density approximation is used for the exchange-correlation potential with the Perdew–Wang parametrization. The equilibrium lattice parameters, bulk modulus, transition pressure, elastic constants and their related parameters such as Poisson's ratio, Young's modulus, shear modulus and Debye temperature were calculated and compared with available experimental and theoretical data, with which they are in reasonable agreement. In this paper the electronic properties are treated with the GGA + U approach, which brings out the important role played by the d states of the noble metal (Cu) and gives the correct nature of the energy band gap. Our results show that both compounds exhibit semiconductor behaviour with a direct band gap.
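The abstract does not quote the formulas used; a standard route from the calculated bulk modulus B and shear modulus G to the Debye temperature (generic textbook relations, not necessarily the exact expressions of the paper) is:

```latex
% Elastic-constant route to the Debye temperature (standard quasi-isotropic relations);
% n = atoms per formula unit, M = molecular mass, \rho = density, N_A = Avogadro's number.
\theta_D = \frac{h}{k_B}\left[\frac{3n}{4\pi}\,\frac{N_A\,\rho}{M}\right]^{1/3} v_m ,
\qquad
v_m = \left[\frac{1}{3}\left(\frac{2}{v_s^{3}}+\frac{1}{v_l^{3}}\right)\right]^{-1/3},
\qquad
v_s = \sqrt{\frac{G}{\rho}},\quad
v_l = \sqrt{\frac{3B+4G}{3\rho}} .
```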
The availability of camera phones provides people with a mobile platform for decoding bar codes, whereas conventional scanners lack mobility. However, using a normal camera phone in such applications is challenging due to the out-of-focus problem. In this paper, we present our research on bar code reading algorithms using a VGA camera phone, the NOKIA 7650. EAN-13, a widely used 1D bar code standard, is taken as an example to show the efficiency of the method. A wavelet-based bar code region location and knowledge-based bar code segmentation scheme is applied to extract bar code characters from poor-quality images. All the segmented bar code characters are input to the recognition engine, and based on the recognition distance, the bar code character string with the smallest total distance is output as the final recognition result. To train an efficient recognition engine, a modified Generalized Learning Vector Quantization (GLVQ) method is designed to optimize the feature extraction matrix and the class reference vectors. 19,584 samples segmented from more than 1000 bar code images captured by the NOKIA 7650 are used in the training process. Testing on 292 bar code images taken by the same phone, the correct recognition rate for entire bar codes reaches 85.62%. We are confident that autofocus or macro modes on camera phones will bring the presented method into real-world mobile use.
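For reference, the EAN-13 standard cited above includes a check digit that can be used to validate a recognised character string; the sketch below implements only the published checksum rule, not the paper's recognition engine.

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits.

    Counting from the left, digits in odd positions are weighted 1
    and digits in even positions are weighted 3.
    """
    if len(first12) != 12 or not first12.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10


def is_valid_ean13(code: str) -> bool:
    """Validate a full 13-digit recognised string against its check digit."""
    return (len(code) == 13 and code.isdigit()
            and int(code[-1]) == ean13_check_digit(code[:12]))


print(is_valid_ean13("4006381333931"))  # True: a commonly cited valid EAN-13 code
```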
Computerized tongue diagnosis can make use of a number of pathological features of the tongue. To date, however, few computerized applications have focused on one very commonly used and distinctive diagnostic textural feature of the tongue, Fungiform Papillae Hyperplasia (FPH). In this paper, we propose a computer-aided system for identifying the presence or absence of FPH. We first define and partition a region of interest (ROI) for texture acquisition. After preprocessing to detect and remove reflective points, a set of 2D Gabor filter banks is used to extract and represent textural features. We then apply Linear Discriminant Analysis (LDA) to classify the samples from the tongue image database. The experimental results reasonably demonstrate the effectiveness of the method described in this paper.
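A minimal sketch of the kind of pipeline described above, assuming OpenCV for the 2D Gabor filter bank and scikit-learn for LDA; the kernel parameters, summary statistics and variable names are illustrative, not the authors' configuration.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def gabor_features(roi_gray: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    """Mean and standard deviation of Gabor responses over a tongue ROI."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        response = cv2.filter2D(roi_gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats)


# X: one feature vector per ROI, y: 1 = FPH present, 0 = absent (labels assumed available).
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```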
Eye movement analysis provides a new way for disease screening, quantification and assessment. In order to track and analyze eye movement scanpaths under different conditions, this paper proposes a Gaussian mixture-Hidden Markov Model (G-HMM) of the eye movement scanpath during saccades, combined with a Time-Shifting Segmentation (TSS) method for model optimization; Linear Discriminant Analysis (LDA) is then used to perform the recognition and evaluation tasks on the multi-dimensional features. In the experiments, eye-movement sequences recorded on 800 real-scene images were used. The results show that the G-HMM method has high specificity for free-search tasks and high sensitivity for prompted object-search tasks, while TSS strengthens the differences between eye movement characteristics, which is conducive to eye movement pattern recognition, especially for search tasks.
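The abstract gives no implementation details; below is a minimal sketch, using the hmmlearn package and random placeholder data, of fitting a Gaussian HMM to scanpath feature sequences and scoring a new scanpath, purely to illustrate the G-HMM idea.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Each scanpath is a sequence of per-saccade feature vectors (e.g. amplitude,
# duration, direction); random placeholders stand in for real eye-tracking data.
scanpaths = [np.random.rand(30, 3) for _ in range(20)]
X = np.concatenate(scanpaths)
lengths = [len(s) for s in scanpaths]

# One G-HMM per task condition; four hidden states is an arbitrary illustrative choice.
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=100)
model.fit(X, lengths)

# Log-likelihood of a new scanpath under this condition's model; such scores can
# feed the downstream LDA-based recognition step.
new_scanpath = np.random.rand(25, 3)
print(model.score(new_scanpath))
```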
Following the Basel II Accord, with the increased focus on operational risk as an aspect distinct from credit and market risk, quantification of operational risk has become a major challenge for banks. This paper analyzes the implications of the advanced measurement approach for estimating operational risk. When modeling the severity of losses in a realistic manner, our preliminary tests indicate that classic distributions cannot fit the entire range of operational risk data samples (collected from public information sources) well. We therefore propose a piecewise-defined severity distribution (PSD) that combines a parametric form for ordinary losses with a generalized Pareto distribution (GPD) for large losses, and estimate operational risk by the loss distribution approach (LDA) with Monte Carlo simulation. We compare the operational risk measured with the piecewise-defined severity distribution based LDA (PSD-LDA) with that obtained from the basic indicator approach (BIA), and the ratios of operational risk regulatory capital of some major international banks with those of Chinese commercial banks. The empirical results demonstrate the rationality and promise of applying the PSD-LDA to Chinese national commercial banks.
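A minimal Monte Carlo sketch of the loss distribution approach with a piecewise severity, a lognormal body and a GPD tail; the threshold and all parameters are illustrative assumptions rather than the calibration reported in the paper.

```python
import numpy as np
from scipy.stats import lognorm, genpareto, poisson

rng = np.random.default_rng(0)

# Illustrative parameters only: lognormal body for ordinary losses, generalized
# Pareto tail for excesses over a threshold u, Poisson annual loss frequency.
u, tail_prob = 1e6, 0.05              # threshold and P(loss falls in the tail)
body = lognorm(s=1.2, scale=2e5)      # ordinary losses (not truncated at u, kept simple)
tail = genpareto(c=0.4, scale=8e5)    # excess losses above u
freq = poisson(mu=25)                 # number of losses per year


def annual_loss() -> float:
    n = freq.rvs(random_state=rng)
    in_tail = rng.random(n) < tail_prob
    losses = np.where(in_tail,
                      u + tail.rvs(size=n, random_state=rng),
                      body.rvs(size=n, random_state=rng))
    return float(losses.sum())


sims = np.array([annual_loss() for _ in range(50_000)])
# Capital estimate in the spirit of the LDA: a high quantile of the annual loss distribution.
print(np.quantile(sims, 0.999))
```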
As social media platforms have gained huge momentum in recent years, the amount of information generated from social media sites is growing exponentially, posing a great challenge for information retrieval systems that must extract potential named entities. Researchers have utilized semantic annotation mechanisms to retrieve entities from unstructured documents, but such mechanisms return too many ambiguous entities. In this work, the DBpedia knowledge base is adopted for entity extraction and categorization. To achieve the entity extraction task precisely, a two-step process is proposed: (a) train on the unstructured datasets with Word2Vec and classify the entities into their respective categories; (b) crawl web pages, forums, and other web sources to identify entities that are not present in DBpedia. The evaluation shows improved precision and a promising F1 score.
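A minimal sketch of step (a), training Word2Vec on the unstructured corpus and using averaged word vectors for category classification; the corpus, labels and parameters are placeholders, not the paper's setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Tokenised sentences from the unstructured corpus (placeholder examples).
sentences = [["barack", "obama", "visited", "berlin"],
             ["apple", "released", "a", "new", "iphone"]]

w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=20)


def mention_vector(tokens):
    """Average the word vectors of an entity mention's tokens."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)


# With known mentions and DBpedia-style category labels (assumed available),
# a simple classifier can then assign categories to new mentions:
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```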
We propose an analysis of electroencephalogram (EEG) signals recorded while performing a monotonous task and while drinking alcohol, using principal component analysis (PCA) and linear discriminant analysis (LDA) for feature extraction and neural networks (NNs) for classification. The EEG is captured while performing a monotonous task that can adversely affect the brain and possibly cause stress. Moreover, we investigate the effects of alcohol on the brain by capturing the data continuously after consumption of equal amounts of alcohol. We hope that our work will shed more light on the relationship between such actions and EEG, and investigate whether there is any relation between the tasks and mental stress. EEG signals offer a rare look at brain activity, while monotonous activities are well known to cause irritation that may contribute to mental stress. We apply PCA and LDA to characterize the change in each component, extract it, and discriminate using an NN. The experiments show that PCA and LDA are effective methods for EEG signal analysis.
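A minimal scikit-learn sketch of the PCA, LDA and neural-network pipeline described above; the feature dimensions, number of components and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

# X: one row of EEG features per epoch (random placeholders); y: condition labels,
# e.g. 0 = baseline, 1 = monotonous task, 2 = after alcohol consumption.
X = np.random.rand(300, 64)
y = np.random.randint(0, 3, size=300)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                          # reduce dimensionality, suppress noise
    LinearDiscriminantAnalysis(n_components=2),    # project onto class-discriminative axes
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
)
pipeline.fit(X, y)
print(pipeline.score(X, y))
```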
Our investigation aims at pre-training clustering models to summarize Vietnamese texts. For this purpose, we create a large-scale dataset of 1,101,101 documents by collecting Vietnamese articles from newspaper websites and extracting their plain text. We propose a new single-document extractive text summarization model based on clustering: the documents are clustered with the hard-clustering k-means algorithm and the soft-clustering LDA (Latent Dirichlet Allocation) algorithm. Then, based on the pre-trained clustering models, a summary model selects the salient sentences of the input text to construct the summary. The empirical results show that our summary model achieves 51.22% ROUGE-1, 17.62% ROUGE-2 and 29.16% ROUGE-L on the test set. Besides traditional word representations such as BoW (Bag-of-Words), we also use meaning-based word representations from FastText and BERT (Bidirectional Encoder Representations from Transformers). An additional benefit of our extractive summary model is that the output summary is a long, readable text. Furthermore, the model's architecture is straightforward, easy to understand, and runs on cost-efficient hardware such as ARM CPUs as well as GPUs.
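As an illustration of the k-means side of the proposed summariser, a minimal sketch that clusters sentence vectors and keeps the sentence nearest each centroid; the TF-IDF vectoriser and cluster count are assumptions, not the paper's BoW/FastText/BERT setup.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans


def extractive_summary(sentences, n_clusters=3):
    """Cluster sentence vectors; keep, per cluster, the sentence nearest the centroid."""
    X = TfidfVectorizer().fit_transform(sentences).toarray()
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    chosen = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        chosen.append(idx[np.argmin(dists)])
    return [sentences[i] for i in sorted(chosen)]  # keep original sentence order


# Usage: extractive_summary(list_of_sentences_from_one_article, n_clusters=3)
```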
Traditional journal analyses of topic trends in IS journals have manually coded target articles from chosen time periods. However, some research efforts have been made to apply automatic bibliometric approaches, such as cluster analysis and probabilistic models, to find topics in academic articles in other research areas. The purpose of this study is thus to investigate research topic trends in Engineering Management from 1998 through 2017 using an LDA analysis model. By investigating topics in EM journals, we provide partial but meaningful trends in EM research topics. The trend analysis shows that there are hot topics with increasing numbers of articles, steady topics that remain constant, and cold topics with decreasing numbers of articles.
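As an illustration of this kind of analysis, a minimal sketch, assuming gensim for the LDA model and placeholder abstracts and years, of assigning each article a dominant topic and counting topics per year.

```python
from collections import Counter
from gensim import corpora
from gensim.models import LdaModel

# docs: tokenised article abstracts; years: publication years (placeholder values).
docs = [["project", "risk", "management", "schedule"],
        ["supply", "chain", "optimization", "inventory"]]
years = [1998, 2017]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

# Dominant topic per article, then topic counts per year, from which hot, steady
# and cold topics can be read off.
trend = Counter()
for bow, year in zip(corpus, years):
    topic = max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
    trend[(year, topic)] += 1
print(trend)
```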
Microblogging platforms like Twitter have, in recent years, become one of the important sources of information for a wide spectrum of users. As a result, these platforms have become great resources for supporting emergency management. During any crisis, it is necessary to sieve through a huge amount of social media text within a short span of time to extract meaningful information. Extraction of emergency-specific information, such as topic keywords, landmarks or geo-locations of sites, from these texts plays a significant role in building an application for emergency management. This paper therefore highlights different aspects of the automatic analysis of tweets to help in developing such an application. It focuses on: (1) identification of crisis-related tweets using machine learning; (2) exploration of topic model implementations and their effectiveness on short messages (as short as 140 characters), together with an exploratory data analysis of crisis-related short texts collected from Twitter and visualizations that reveal the commonalities and differences between topics and between different crisis-related datasets; and (3) a proof of concept for identifying and retrieving geo-locations from tweets and extracting GPS coordinates from them to plot them approximately on a map.
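As an illustration of step (1), a minimal sketch of a crisis-tweet classifier; the labelled tweets, vectoriser and model choice are illustrative assumptions rather than the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled tweets (placeholders): 1 = crisis-related, 0 = not crisis-related.
tweets = ["Flood waters rising near the old bridge, need rescue boats",
          "Great coffee at the new cafe downtown"]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print(clf.predict(["Road blocked after the earthquake, people trapped"]))
```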
Negative online reviews have become essential decision-making information for businesses. This chapter conducts text mining on negative online reviews of e-commerce platforms to accurately identify problems in online platform transactions, uses social network analysis to clarify the correlations between critical factors in negative reviews, and applies the LDA topic model to mine eight significant themes of negative reviews, namely platform rider disputes, education refund difficulties, difficulty in canceling or changing reservations, damage or loss of goods, taxi disputes, payment harassment complaints, platform member disputes, and slow customer service response. This chapter is of great significance for improving the quality of products and services, enhancing customer satisfaction, and helping the government effectively regulate e-commerce platforms.
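As an illustration of the social-network-analysis step mentioned above, a minimal sketch, with networkx and placeholder keyword lists standing in for the extracted critical factors, of building a factor co-occurrence network and ranking factors by centrality.

```python
from itertools import combinations
import networkx as nx

# Critical factors extracted from each negative review (placeholder keyword lists).
reviews_keywords = [["refund", "customer_service", "delay"],
                    ["rider", "dispute", "delay"],
                    ["refund", "delay"]]

# Co-occurrence network: factors are nodes; appearing in the same review adds edge weight.
G = nx.Graph()
for kws in reviews_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Centrality highlights the factors most connected to other complaint factors.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
```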
This chapter deals with the calibration of a new simplified experimental method to evaluate the absolute roughness of vegetated channels. The method is based on boundary layer measurements in a short channel rather than on the usual uniform flow measurements. The proposed method can be applied to any kind of rough bed, but it is particularly useful for vegetated beds, where long channels are difficult to prepare. In this chapter a calibration coefficient is experimentally obtained. In order to perform suitable comparisons with literature data, the relationships between the absolute roughness ε and Manning's n coefficient are examined in depth. The results compare very well with experimental data from the literature. Finally, a particular dependence of the ε values on vegetation density is explained through further experiments. In conclusion, the proposed method, once calibrated, can provide reliable predictions of absolute roughness in vegetated channels.
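The relationship actually calibrated in the chapter is not quoted here; one commonly cited Strickler-type link between a roughness height and Manning's n, shown only to illustrate the kind of relation involved, is:

```latex
% Strickler-type relation (illustrative only, not necessarily the one used in the chapter);
% \varepsilon is the roughness height in metres and A an empirical constant of order
% 20--26 \mathrm{m^{1/2}\,s^{-1}}, depending on the dataset and roughness measure adopted.
n \;\approx\; \frac{\varepsilon^{1/6}}{A}
```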
Classification techniques are routinely applied to satellite images. Pansharpening techniques can be used to provide super-resolved multispectral images that improve the performance of classification methods. So far, these pansharpening methods have been explored only as a preprocessing step. In this work we address the problem of adaptively modifying the pansharpening method in order to improve the precision and recall figures of merit for the classification of a given class without significantly deteriorating the performance of the classifier on the other classes. The validity of the proposed technique is demonstrated using a real QuickBird image.
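As a generic illustration of pansharpening (not the adaptive scheme proposed in the paper), a minimal numpy sketch of the Brovey transform, one of the simplest component-substitution-style rules.

```python
import numpy as np


def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Brovey-transform pansharpening.

    ms  : multispectral bands upsampled to the panchromatic grid, shape (bands, H, W)
    pan : panchromatic band, shape (H, W)
    Each band is rescaled so that the band sum matches the panchromatic intensity.
    """
    intensity = ms.sum(axis=0) + eps
    return ms * (pan / intensity)[None, :, :]


# Random arrays stand in for a four-band scene and its panchromatic image.
ms = np.random.rand(4, 256, 256)
pan = np.random.rand(256, 256)
print(brovey_pansharpen(ms, pan).shape)  # (4, 256, 256)
```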
This paper presents a method for tracing software evolution based on Latent Dirichlet Allocation (LDA). LDA analyzes the interdependency among words, topics and documents, and expresses this interdependency as probabilities. To model software evolution with LDA, each package in the source code is treated as a document, function (method) names, variable names and comments are regarded as words, and the probabilities relating the three are computed. Comparing these results with the update reports makes it possible to confirm that a new software version is consistent with its update reports.
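As an illustration of the modeling step just described, a minimal sketch (assuming gensim, which the paper does not necessarily use) that treats each package as a document and identifier and comment tokens as words.

```python
import re
from gensim import corpora
from gensim.models import LdaModel


def split_identifier(text: str):
    """Split camelCase / snake_case identifiers and comments into lower-case tokens."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", text).replace("_", " ")
    return [t.lower() for t in spaced.split()]


# One "document" per source package: its function names, variable names and comment words
# (placeholder package contents).
packages = {"net.parser": ["parseHeader", "read_buffer", "token stream handling"],
            "ui.widgets": ["drawButton", "onClick", "refresh the layout"]}
docs = [[tok for item in items for tok in split_identifier(item)]
        for items in packages.values()]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
print(lda.print_topics())
```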