A majority of individuals experience recurrent issues due to coronary artery disease (CAD), necessitating an algorithm that can accurately predict the onset of cardiac arrest. This paper therefore proposes a heart disease prediction (HDP) model based on a deep learning modified neural network (EPHP-DLMNN). The solution uses the universally accessible heart disease dataset, which contains patient-provided data on heart disease obtained via IoT sensors. In the preprocessing phase, adaptive median filtering (SAMF) and fixed weighted mean (FWM) filtering are applied to the input image. Feature extraction is performed using the local binary pattern (LBP), gray level co-occurrence matrix (GLCM), gray level run length method (GLRLM), and Haralick features. The optimal features are selected using a hybrid African vulture with egret swarm optimization (HAVESO), which combines African vulture optimization (AVO) with egret swarm optimization (ESO). The selected features are classified by a modified deep bi-gated recurrent neural network (MDBi-GRNN), enhanced with a Bernoulli distribution function, and the output is refined with an improved Beluga whale optimizer (IBWO) tuned by the butterfly optimization algorithm (BOA). The MDBi-GRNN model, implemented in Python, achieves an accuracy of 0.984 and a precision of 0.976, and is also evaluated in terms of specificity, sensitivity, F-score, kappa value, and execution time.
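As a hedged illustration of the texture-feature step only (not the authors' code), the sketch below computes a few GLCM (Haralick-style) statistics and a uniform LBP histogram for a grayscale input with scikit-image; the offsets, properties and bin counts are assumptions.

```python
# Hypothetical sketch of the GLCM/LBP texture-feature step; not the authors' implementation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray_image, levels=256):
    """Extract a few GLCM (Haralick-style) statistics and a uniform LBP histogram."""
    img = gray_image.astype(np.uint8)
    # Co-occurrence matrix over one-pixel offsets in four directions.
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
    # Uniform LBP histogram (8 neighbors, radius 1).
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_stats, lbp_hist])
```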
Content-based image retrieval (CBIR) is a broad research field in the current digital world. This paper focuses on content-based image retrieval using visual properties that carry high-level semantic information. The discrepancy between low-level and high-level features is known as the semantic gap, which is the biggest problem in CBIR. The visual characteristics are extracted from low-level features such as color, texture and shape, and these low-level features raise the performance level of CBIR. The paper mainly focuses on an image retrieval system that combines three color spaces (TriCLR: RGB, YCbCr, and L∗a∗b∗) with a histogram of LBP texture features (HistLBP), referred to as the hybrid TriCLR and HistLBP method. The study also discusses this hybrid method in light of low-level features. Finally, the hybrid approach, using the TriCLR and HistLBP algorithm, provides a new solution for the CBIR system that outperforms existing methods.
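A minimal sketch of how such a combined color-plus-LBP-histogram descriptor could be assembled with scikit-image is given below; the bin counts, channel handling and normalization are assumptions, not the paper's specification.

```python
# Hypothetical TriCLR + HistLBP descriptor; illustrative only.
import numpy as np
from skimage.color import rgb2gray, rgb2lab, rgb2ycbcr
from skimage.feature import local_binary_pattern

def tri_color_lbp_descriptor(rgb_image, color_bins=16, lbp_points=8, lbp_radius=1):
    """Concatenate per-channel histograms from RGB, YCbCr and L*a*b* with a uniform LBP histogram."""
    feats = []
    for converted in (rgb_image, rgb2ycbcr(rgb_image), rgb2lab(rgb_image)):
        for ch in range(3):
            channel = converted[..., ch].astype(float)
            hist, _ = np.histogram(channel, bins=color_bins,
                                   range=(channel.min(), channel.max() + 1e-6),
                                   density=True)
            feats.append(hist)
    # Texture part: histogram of uniform LBP codes on the grayscale image.
    lbp = local_binary_pattern(rgb2gray(rgb_image), P=lbp_points, R=lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)
    feats.append(lbp_hist)
    return np.concatenate(feats)

# Usage with a synthetic RGB image (replace with a real query/database image).
descriptor = tri_color_lbp_descriptor(np.random.default_rng(0).random((64, 64, 3)))
```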
Feature extraction and classification of brain signals are very significant in brain–computer interfaces (BCI). In this study, we describe an algorithm for motor imagery (MI) classification in an electrocorticogram (ECoG)-based BCI. The proposed approach employs multi-resolution fractal measures and local binary pattern (LBP) operators to form a combined feature for characterizing an ECoG epoch recorded from the right hemisphere of the brain. A classifier is trained using gradient boosting in conjunction with the ordinary least squares (OLS) method. The fractal intercept, lacunarity and LBP features are extracted to classify imagined movements of either the left small finger or the tongue. Experimental results on dataset I of BCI competition III demonstrate the superior performance of our method: the cross-validation accuracy and test accuracy are 90.6% and 95%, respectively. Furthermore, the low computational burden of the method makes it a promising candidate for real-time BCI systems.
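As a hedged stand-in for the classification stage (the abstract couples gradient boosting with OLS base learners, which is not reproduced here), the sketch below trains scikit-learn's tree-based GradientBoostingClassifier on a placeholder matrix of fractal-plus-LBP features; sample counts, feature layout and labels are synthetic.

```python
# Tree-based gradient boosting stand-in for the fractal + LBP classification stage; placeholder features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Placeholder rows: [fractal intercept, lacunarity, LBP histogram bins ...] per ECoG epoch.
X = rng.random((200, 12))
y = np.repeat([0, 1], 100)                # left small finger vs. tongue (synthetic labels)

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
print("cross-validation accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```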
The automatic identification of epileptic electroencephalogram (EEG) signals can assist doctors in the diagnosis of epilepsy and provide greater safety and quality of life for people with epilepsy. Feature extraction of EEG signals determines the performance of the whole recognition system. In this paper, a novel method using the local binary pattern (LBP) based on the wavelet transform (WT) is proposed to characterize the behavior of EEG activities. First, the WT is employed for time–frequency decomposition of the EEG signals. The "uniform" LBP operator is then applied to the wavelet-based time–frequency representation, and the generated histogram is used as the EEG feature vector quantifying the textural information of the wavelet coefficients. The LBP features coupled with a support vector machine (SVM) classifier yield satisfactory recognition accuracies of 98.88% for interictal and ictal EEG classification and 98.92% for normal, interictal and ictal EEG classification on the publicly available EEG dataset. Moreover, numerical results on another large EEG dataset demonstrate that the proposed method can also effectively detect seizure events from multi-channel raw EEG data. Compared with the standard LBP, the "uniform" LBP yields a much shorter histogram, which greatly reduces the computational burden of classification and enables ictal EEG signals to be detected in real time.
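The following sketch illustrates the general idea (wavelet time–frequency representation, uniform LBP histogram, SVM) with PyWavelets, scikit-image and scikit-learn; the wavelet, scales and classifier settings are assumptions rather than the paper's exact configuration.

```python
# Illustrative wavelet + uniform-LBP + SVM pipeline; not the authors' exact configuration.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def wavelet_lbp_feature(eeg_epoch, scales=np.arange(1, 64), wavelet="morl",
                        lbp_points=8, lbp_radius=1):
    """Turn a 1-D EEG epoch into a uniform-LBP histogram of its wavelet scalogram."""
    coeffs, _ = pywt.cwt(eeg_epoch, scales, wavelet)          # 2-D time-frequency map
    scalogram = np.abs(coeffs)
    lbp = local_binary_pattern(scalogram, P=lbp_points, R=lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)
    return hist

# Usage with synthetic data (replace with real interictal/ictal epochs and labels).
rng = np.random.default_rng(0)
X = np.array([wavelet_lbp_feature(rng.standard_normal(512)) for _ in range(40)])
y = np.repeat([0, 1], 20)                  # synthetic interictal/ictal labels
clf = SVC(kernel="rbf").fit(X, y)
```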
Imbalanced data classification is a challenging task in automatic seizure detection from electroencephalogram (EEG) recordings when the durations of non-seizure periods are much longer than those of seizure activities. An imbalanced learning model is proposed in this paper to improve the identification of seizure events in long-term EEG signals. To better represent the underlying microstructure distributions of EEG signals while preserving their non-stationary nature, a discrete wavelet transform (DWT) and uniform 1D-LBP feature extraction procedure is introduced. A learning framework is then designed as an ensemble of weakly trained support vector machines (SVMs). Under-sampling is employed to split the imbalanced seizure and non-seizure samples into multiple balanced subsets, each of which is used to train an individual SVM classifier. The weak SVMs are combined into a strong classifier that emphasizes seizure samples while accounting for the imbalanced class distribution of EEG data. Final seizure detection results are obtained in a multi-level decision fusion process that considers temporal and frequency factors. The model was validated on two long-term and one short-term public EEG databases. It achieved a G-mean of 97.14% in the epoch-level assessment, an event-level sensitivity of 96.67%, and a false detection rate of 0.86/h on the long-term intracranial database. An epoch-level G-mean of 95.28% and an event-level false detection rate of 0.81/h were obtained on the long-term scalp database. Comparisons with 14 published methods demonstrate the improved detection performance for imbalanced EEG signals and the generalizability of the proposed model.
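A minimal sketch of a 1D-LBP histogram for a wavelet sub-band is shown below; the neighborhood size is an assumption, and the restriction to uniform codes and the under-sampled SVM ensemble are omitted for brevity, so this only illustrates the kind of feature the abstract refers to.

```python
# Illustrative 1D-LBP feature for a 1-D signal (e.g., a DWT sub-band); not the paper's exact definition.
import numpy as np

def one_d_lbp_histogram(signal, neighbors=8):
    """Compare each sample with its surrounding samples and histogram the resulting binary codes."""
    half = neighbors // 2
    codes = []
    for i in range(half, len(signal) - half):
        center = signal[i]
        window = np.concatenate([signal[i - half:i], signal[i + 1:i + half + 1]])
        bits = (window >= center).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    hist, _ = np.histogram(codes, bins=2 ** neighbors, range=(0, 2 ** neighbors), density=True)
    return hist

# Example on a synthetic wavelet sub-band.
rng = np.random.default_rng(1)
feature = one_d_lbp_histogram(rng.standard_normal(256))
```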
A novel object tracking algorithm is presented in this paper that uses a joint color-texture histogram to represent the target and then applies it in the mean shift framework. In addition to the conventional color histogram features, texture features of the object are extracted with the local binary pattern (LBP) technique. The major uniform LBP patterns are exploited to form a mask for joint color-texture feature selection. Compared with traditional color histogram based algorithms that use the whole target region for tracking, the proposed algorithm effectively extracts the edge and corner features in the target region, which characterize the target better and represent it more robustly. The experimental results show that the proposed method greatly improves tracking accuracy and efficiency, requiring fewer mean shift iterations than standard mean shift tracking. It can robustly track the target in complex scenes, such as when the target and background have similar appearance, where traditional color-based schemes may fail.
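A hedged sketch of building a joint color-texture histogram for a target region is given below (quantized hue bins crossed with uniform LBP bins); the choice of hue, the bin counts and the absence of the uniform-pattern mask are illustrative simplifications, not the paper's exact model.

```python
# Illustrative joint color-texture histogram for a target region; not the authors' exact model.
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern

def joint_color_texture_histogram(region_rgb, hue_bins=16, lbp_points=8):
    """2-D histogram over quantized hue and uniform LBP codes, flattened to a vector."""
    hue = rgb2hsv(region_rgb)[..., 0]                           # hue in [0, 1]
    lbp = local_binary_pattern(rgb2gray(region_rgb), P=lbp_points, R=1, method="uniform")
    hue_idx = np.clip((hue * hue_bins).astype(int), 0, hue_bins - 1)
    lbp_idx = lbp.astype(int)                                   # uniform codes: 0 .. lbp_points + 1
    hist = np.zeros((hue_bins, lbp_points + 2))
    np.add.at(hist, (hue_idx.ravel(), lbp_idx.ravel()), 1)
    return hist.ravel() / hist.sum()

# Usage with a synthetic target region.
target_model = joint_color_texture_histogram(np.random.default_rng(2).random((40, 40, 3)))
```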
Facial expression recognition has been widely researched in recent years because of its applications in intelligent communication systems. Many methods have been developed that extract Local Binary Pattern (LBP) features and pair them with different classification techniques to achieve better facial expression recognition. In this work, we propose a novel method for recognizing facial expressions based on Local Binary Pattern features and a Support Vector Machine, with two effective improvements: the preprocessing step, and the way face images are divided into non-overlapping square regions for extracting LBP features. The method was evaluated on three typical kinds of database: small (213 images), medium (2040 images) and large (5130 images). Experimental results show the effectiveness of our method in obtaining a remarkably better recognition rate in comparison with other methods.
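The sketch below illustrates region-wise LBP feature extraction of the kind the abstract describes (dividing a face image into non-overlapping square regions, histogramming uniform LBP codes per region, concatenating, and classifying with an SVM); the grid size, image size and SVM settings are assumptions.

```python
# Illustrative block-wise LBP + SVM pipeline; grid size and SVM settings are assumed.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def blockwise_lbp(face_gray, grid=(7, 7), points=8, radius=1):
    """Concatenate uniform-LBP histograms computed over a grid of non-overlapping square regions."""
    lbp = local_binary_pattern(face_gray, P=points, R=radius, method="uniform")
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=points + 2, range=(0, points + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Usage with synthetic faces (replace with real images and expression labels).
rng = np.random.default_rng(2)
X = np.array([blockwise_lbp(rng.random((112, 112))) for _ in range(30)])
y = np.arange(30) % 7                      # e.g., seven expression classes
clf = SVC(kernel="linear").fit(X, y)
```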
A new action model is proposed by revisiting local binary patterns (LBP) for dynamic texture models, applied to trajectory beams calculated on the video. The use of a semi-dense trajectory field dramatically reduces the computation support to essential motion information, while maintaining a large amount of data to ensure the robustness of statistical bag-of-features action models. A new binary pattern, called the Spatial Motion Pattern (SMP), is proposed, which captures the self-similarity of velocity around each tracked point (particle) along its trajectory. This operator highlights the geometric shape of rigid parts of moving objects in a video sequence. SMPs are combined with basic velocity information to form the local action primitives. A global representation of a space × time video block is then provided by hierarchical blockwise histograms, which efficiently represent the action as a whole while preserving a certain level of spatiotemporal relation between the action primitives. Inheriting the efficiency and invariance properties of both the semi-dense tracker Video extruder and LBP-based representations, the method is designed for the fast computation of action descriptors in unconstrained videos. To improve both robustness and computation time for high-definition video, we also present an enhanced version of the semi-dense tracker based on so-called super particles, which reduces the number of trajectories while improving their length, reliability and spatial distribution.
In computer vision, Local Binary Pattern (LBP) and Scale Invariant Feature Transform (SIFT) are two widely used local descriptors. In this paper, we propose to combine them effectively for scene categorization. First, LBP and SIFT features are regularly extracted from training images to construct an LBP feature codebook and a SIFT feature codebook. Then, a two-dimensional table is created by combining the obtained codebooks. To create a representation for an image, LBP and SIFT features extracted from the same positions of the image are encoded together based on sparse coding using the two-dimensional table. After processing all features in the input image, spatial max pooling is adopted to determine its representation. The obtained image representations are forwarded to a Support Vector Machine classifier for categorization. In addition, to further improve scene categorization performance, we propose a method to select correlated visual words from large codebooks for constructing the two-dimensional table. Finally, extensive experiments on the Scene Categories 8, Scene Categories 15 and MIT 67 Indoor Scene datasets demonstrate that the proposed method is effective for scene categorization.
In this paper, we propose a gray-scale texture descriptor, named the global and local oriented edge magnitude patterns (GLOEMP), for texture classification. GLOEMP is a framework that effectively combines local texture, global structure information and the contrast of texture images. In GLOEMP, the principal orientation is determined by the Histogram of Oriented Gradients (HOG) feature, and each direction is then described in detail by a local binary pattern (LBP) occurrence histogram. Because GLOEMP characterizes image information across different directions, it contains very rich information. A global-level rotation compensation method is proposed, which shifts the principal orientation of the HOG to the first position, allowing GLOEMP to be robust to rotation. In addition, gradient magnitudes are used as weights in the histogram, making GLOEMP robust to lighting variations as well and giving it a strong ability to express edge information. Experimental results obtained on representative databases demonstrate that the proposed GLOEMP framework achieves significant improvement, in some cases reaching classification accuracy 10% higher than the traditional rotation-invariant LBP method.
A visual secret sharing (VSS) scheme is intended to share secret information within a group to avoid the potential threats of interruption and modification. In this paper, we present a novel VSS scheme based on an improved local binary pattern (LBP) operator. It makes full use of the local contrast features of LBP to conceal secret image data in different image shares, from which the secret can be recovered easily and exactly. By varying the LBP extensions, various kinds of VSS schemes for sharing secret information can be designed. Compared with currently available VSS algorithms, the proposed scheme demonstrates better randomness in the shares with less pixel expansion, and exact reconstruction with lower computational cost.
In this paper, a novel feature extraction method based on an improved color local binary pattern (LBP) is proposed for color face recognition. First, in a given neighborhood of every pixel, we choose sampling points from the three color channels simultaneously, and the numbers of sampling points from each channel may differ. Second, we use a new rule to select the threshold, which does not always lie at the geometric center of the given neighborhood. Third, to exploit the potential of the proposed sampling method, we use the k-uniform LBP to obtain the binary code of each pixel. In addition, we embed the Hamming distance into our method to improve its recognition rate. To evaluate the performance of our method, we implement the proposed method and several related methods on five public face databases: the FERET, CMU-PIE, Georgia, FEI and Asian databases. Experimental results show that our method achieves higher recognition rates and lower computational cost than other related color face recognition methods.
Support vector machines (SVMs) are widely used for face recognition. However, kernel function selection (choosing the kernel and its parameters) is a key and difficult problem for SVMs. This paper aims to contribute to this problem, focusing on optimizing the parameters of the selected kernel function. The bacterial foraging optimization algorithm, inspired by the social foraging behavior of Escherichia coli, has been widely accepted as a global optimization algorithm of current interest for distributed optimization and control. We therefore propose to optimize the SVM parameters with an improved bacterial foraging optimization algorithm (IBFOA), in which a dynamic elimination-dispersal probability in the elimination-dispersal step and a dynamic step size in the chemotactic step improve the performance of the original algorithm. The optimized SVM is then used for face recognition. In addition, an improved local binary pattern is proposed in this paper to extract features of face images and improve the face recognition accuracy. Numerical results show the advantage of our algorithm over a range of existing algorithms.
Automated recognition and classification of fishes are useful for studies dealing with counting fishes for population assessments, discovering associations between fishes and the ecosystem, and monitoring the ecosystem. This paper proposes a model that classifies fishes belonging to the family Labridae at the genus and species levels. Features computed in the spatial and frequency domains are used in this work. All images are preprocessed before feature extraction; the preprocessing step involves image segmentation for background elimination, de-noising and image enhancement. A combination of color, local binary pattern (LBP), histogram of oriented gradients (HOG), and wavelet features forms the feature vector. An ensemble feature reduction technique is used to reduce the attribute size. The performance of the system using the combined as well as the reduced feature sets is evaluated with seven popular classifiers. Among the classifiers, the wavelet kernel extreme learning machine (ELM) showed the highest classification accuracy of 96.65% at the genus level, and the polynomial kernel ELM showed an accuracy of 92.42% at the species level with the reduced feature set.
Face recognition has been extensively studied by many scholars in recent decades. The local binary pattern (LBP) is one of the most popular local descriptors and has been widely applied to face recognition. The wavelet transform has also become increasingly active in the field of pattern recognition. In this paper, a novel feature extraction method is proposed to overcome the influence of illumination. First, a given face image is processed by the LBP operator to obtain an LBP image. Second, the wavelet transform is used to extract discriminant features from the LBP image. Experimental results on the LFW, Extended YaleB and CMU-PIE face databases show that the proposed method outperforms several popular face recognition methods, and that the preprocessing step plays an important role in extracting effective features for classification.
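A minimal sketch of this two-step idea (LBP image followed by a 2-D wavelet decomposition) is shown below using scikit-image and PyWavelets; the wavelet choice and the use of the approximation sub-band as the feature are assumptions.

```python
# Illustrative LBP-then-wavelet feature extraction; wavelet and sub-band choice are assumed.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def lbp_wavelet_feature(face_gray, points=8, radius=1, wavelet="haar"):
    """Compute an LBP image, then keep the low-frequency (approximation) wavelet sub-band as the feature."""
    lbp_image = local_binary_pattern(face_gray, P=points, R=radius, method="uniform")
    approx, (horiz, vert, diag) = pywt.dwt2(lbp_image, wavelet)
    return approx.ravel()      # the detail sub-bands could also contribute discriminant features

# Usage with a synthetic face image.
feature = lbp_wavelet_feature(np.random.default_rng(3).random((64, 64)))
```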
In this paper, we propose two self-adapting patch strategies, obtained by applying the integral projection technique to each image's edge image, where the edge image is recovered by the two-dimensional discrete wavelet transform. The patch strategies have the advantage of taking each image's unique properties into account while maintaining the integrity of particular local information. Combining the self-adapting patch strategies with local binary pattern feature extraction and a classifier based on forward and backward greedy algorithms under a strong sparsity constraint, we propose two new face recognition methods. Experiments are run on the Georgia Tech, LFW and AR face databases. The numerical results show that the new methods outperform related patch-based methods by a large margin.
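The sketch below illustrates the integral projection technique on a wavelet-derived edge image (row and column sums of the detail coefficients), which is the building block the abstract refers to; combining the horizontal, vertical and diagonal sub-bands by absolute value is an assumed choice.

```python
# Illustrative integral projections of a DWT edge image; sub-band combination is an assumed choice.
import numpy as np
import pywt

def integral_projections(image_gray, wavelet="haar"):
    """Recover an edge image from the DWT detail sub-bands and return its row/column integral projections."""
    _, (horiz, vert, diag) = pywt.dwt2(image_gray, wavelet)
    edge_image = np.abs(horiz) + np.abs(vert) + np.abs(diag)
    vertical_projection = edge_image.sum(axis=0)    # sum over rows -> one value per column
    horizontal_projection = edge_image.sum(axis=1)  # sum over columns -> one value per row
    return horizontal_projection, vertical_projection
```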
Automatic Facial Expression Recognition (FER) has become essential today because of its many real-time applications, such as animation, driver mood detection, lie detection, and clinical psychology. The effectiveness of FER systems mainly depends on the extracted features. For extracting distinctive features with low dimensionality, a new local texture-based image descriptor named the Dimensionality Reduced Chess Pattern (DRCP) is proposed for recognizing facial expressions in a person-independent scenario. DRCP, an improvement over the Chess Pattern (CP), is mainly proposed to effectively reduce the feature vector length of CP. For feature extraction, DRCP also considers the movements of chessmen in a 5×5 neighborhood, as in CP. In DRCP, apart from the center pixel, the remaining 24 pixels of the 5×5 neighborhood are arranged into four groups such that each group contains the pixels corresponding to three chessmen. One feature is extracted from each group, so four features are extracted per 5×5 neighborhood. The extracted features are fed into a multi-class Support Vector Machine (SVM) for expression recognition. The experiments are performed on five "in the lab" datasets (MUG, TFEID, JAFFE, CK+ and KDEF) and two "in the wild" datasets (RAF and SFEW) in a person-independent setup to simulate a real-world scenario.
In this paper, a system based on image descriptors and Local Histogram Concatenation (LHC) for finger vein recognition is introduced. The LHC of image descriptors such as LBP, LDP and CLBP cannot be inverted back to the original images, so they can provide good security if stored as enrollment data. On the other hand, the LHC technique does not depict spatial information, so it is expected to be less sensitive to image misalignment if a histogram difference measure such as the chi-square distance (dX2) is used for recognition. The use of the histogram difference makes the system more robust to misalignment compared with pixel-by-pixel measures such as the Hamming Distance (HD). LHC is implemented by dividing the image descriptor into non-overlapping grids; the histogram within each grid is calculated and concatenated with the histograms of the preceding grids, and finally the concatenated histograms of two images are compared using the dX2 measure. Two datasets, UTFVP and SDUMLA-HMT, are used for testing the performance of the system. The results show that the Identification Recognition Rate (IRR) improves when the LHCs of the image descriptors with the dX2 measure are used, compared with using only the image descriptors with the HD measure. For the UTFVP dataset, the IRR values were 97.44%, 95% and 98.37% when LHC and dX2 were used with LBP, LDP and CLBP, respectively, whereas these values were 89.44%, 92.63% and 92.92% when only LBP, LDP and CLBP with HD were used. For the SDUMLA-HMT dataset, the IRR values were 98.43%, 98.69% and 98.85% when LHC and dX2 were used with LBP, LDP and CLBP, respectively, whereas these values were 97.6%, 98.24% and 97.27% when only LBP, LDP and CLBP with HD were used.
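The sketch below illustrates how such a local histogram concatenation and a chi-square comparison could look for an LBP descriptor image; the grid size, bin count and the exact chi-square formula are assumptions.

```python
# Illustrative Local Histogram Concatenation (LHC) with a chi-square comparison; parameters are assumed.
import numpy as np
from skimage.feature import local_binary_pattern

def lhc_descriptor(vein_gray, grid=(4, 8), points=8, radius=1):
    """Uniform-LBP descriptor image split into non-overlapping grids; per-grid histograms concatenated."""
    lbp = local_binary_pattern(vein_gray, P=points, R=radius, method="uniform")
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=points + 2, range=(0, points + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two concatenated histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Usage: a smaller distance means a better match between two finger-vein images.
rng = np.random.default_rng(4)
d = chi_square_distance(lhc_descriptor(rng.random((64, 128))),
                        lhc_descriptor(rng.random((64, 128))))
```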
Cybercriminals motivated by malign purposes and financial gain are rapidly developing new variants of sophisticated malware using automated tools, and most of this malware targets Windows operating systems. This serious threat demands efficient techniques to analyze and detect zero-day, polymorphic and metamorphic malware. This paper introduces two frameworks for Windows malware detection using random forest algorithms. The first scheme is trained on features obtained from static and dynamic analysis, and the second scheme uses features obtained from static and dynamic analysis, malware image analysis, locality-sensitive hashing and file format inspection. We carried out extensive experiments on the two feature sets, and the proposed schemes are evaluated using seven standard evaluation metrics. The results demonstrate that the second scheme recognizes unseen malware better than the first scheme and three state-of-the-art works. The findings show that the second scheme's multi-view feature set contributes to its 99.58% accuracy and low false positive rate of 0.54%.
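As a hedged illustration of the classification stage only (not the authors' pipeline), the sketch below trains a random forest on a combined feature matrix with scikit-learn; the feature matrix, labels and hyperparameters are placeholders.

```python
# Illustrative random-forest malware classifier on a combined feature matrix; placeholder data and settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# Placeholder feature matrix: rows are samples, columns are static/dynamic/image-derived features.
X = rng.random((500, 64))
y = rng.integers(0, 2, size=500)          # 1 = malware, 0 = benign (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```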
This paper investigates and compares the performance of local descriptors for race classification from face images. Two powerful types of local descriptors are considered in this study: Local Binary Patterns (LBP) and Weber Local Descriptors (WLD). First, we investigate the performance of LBP and WLD separately and experiment with different parameter values to optimize race classification. Second, we apply the Kruskal-Wallis feature selection algorithm to select a subset of more "discriminative" bins from the LBP and WLD histograms. Finally, we fuse LBP and WLD, both at the feature and score levels, to further improve race classification accuracy. For classification, we consider the minimum distance classifier and experiment with three distance measures: City-block, Euclidean, and Chi-square. We have performed extensive experiments and comparisons using five race groups from the FERET database. Our experimental results indicate that (i) using Kruskal-Wallis feature selection, (ii) fusing LBP with WLD at the feature level, and (iii) using the City-block distance for classification outperforms LBP and WLD alone, as well as methods based on holistic features such as Principal Component Analysis (PCA) and globally applied LBP or WLD.
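A hedged sketch of the Kruskal-Wallis bin selection and the city-block minimum distance classifier is given below using scipy and numpy; the histogram features, labels and the number of retained bins are placeholders.

```python
# Illustrative Kruskal-Wallis bin selection and city-block minimum-distance classification; placeholder data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(6)
X = rng.random((100, 256))                 # placeholder LBP/WLD histogram features
y = rng.integers(0, 5, size=100)           # five race-group labels

# Rank each histogram bin by the Kruskal-Wallis H statistic across the classes, keep the top bins.
h_stats = np.array([kruskal(*[X[y == c, j] for c in np.unique(y)]).statistic
                    for j in range(X.shape[1])])
selected = np.argsort(h_stats)[::-1][:64]

# Minimum distance classifier with the city-block (L1) distance to each class mean.
class_means = np.stack([X[y == c][:, selected].mean(axis=0) for c in np.unique(y)])

def predict(sample):
    distances = np.abs(class_means - sample[selected]).sum(axis=1)
    return np.unique(y)[np.argmin(distances)]

print(predict(X[0]))
```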