Lycium barbarum polysaccharides (LBP) are the major active components of wolfberry. In this study, we investigated the role of LBP in endothelial dysfunction induced by oxidative stress, and the underlying mechanisms, using rat thoracic aortic endothelial cells (RAECs) as a model. We found that Ang II inhibited the viability of RAECs, with 10−6 mol/L Ang II for 24 h being the most potent treatment (P<0.05); Ang II treatment also increased the level of reactive oxygen species (ROS) (P<0.01) and decreased the expression of Occludin and Zonula occludens-1 (ZO-1) (P<0.05). However, preincubation of the cells with LBP inhibited these Ang II-induced changes: LBP increased cell viability (P<0.05), decreased ROS levels (P<0.01), and up-regulated the expression of Occludin (P<0.05) and ZO-1. In addition, Ang II treatment increased the expression of EGFR and p-EGFR (Tyr1172), and this increase was inhibited by LBP. Conversely, the expression of ErbB2, p-ErbB2 (Tyr1248), PI3K, p-eNOS (Ser1177) (P<0.05), and p-AKT (Ser473) (P<0.05) was inhibited by Ang II treatment and restored by LBP. Treatment of the cells with inhibitors showed that the regulation of p-eNOS and p-AKT expression by Ang II and LBP was blocked by the PI3K inhibitor wortmannin, but not by the EGFR/ErbB2 inhibitor AC480. Taken together, our results suggest that LBP plays a critical role in maintaining the integrity of the vascular endothelium by reducing ROS production via regulation of EGFR, ErbB2, and PI3K/AKT/eNOS activity, and may therefore offer a novel therapeutic option in the management of endothelial dysfunction.
Texture classification is one of the important fields in pattern recognition and machine vision research. The LBP method [13–15], proposed by Ojala, can classify texture images effectively and has rotation-invariant, illumination-invariant, and multi-resolution characteristics. However, because the contrast between neighboring pixels is not considered, the classification rate of this method is strongly influenced by the light source type and orientation. The LMLCP (Local Multiple Layer Contrast Pattern) method proposed in this paper maps the contrast value between two neighboring pixels to a rank value, which represents a relative contrast range, and computes a statistical histogram in the same manner as the LBP method. Since LMLCP would otherwise cause a rapid expansion of the feature dimension, the special feature encoding method used in 3DLBP [6] is adopted. Experiments on Outex_TC_00012 [12] demonstrate that LMLCP achieves a markedly higher classification rate than the LBP method.
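As an illustration, the basic LBP operator that LMLCP builds on can be sketched in a few lines of Python (a minimal radius-1, 8-neighbour version; the contrast ranking of LMLCP itself is not reproduced here):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour LBP code for each interior pixel
    (radius 1, as in the classic Ojala operator)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised histogram of LBP codes, used as the
    texture feature vector."""
    hist = np.bincount(lbp8(img).ravel(), minlength=256)
    return hist / hist.sum()
```

Concatenating such histograms over image blocks yields the feature vector that is compared during classification.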
Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. Smile recognition, for instance, has been studied because the smile is considered an important facial expression in human communication and is therefore likely to be useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, new applications become possible: quantifying the emotion at low computation cost and high accuracy. To this end, we use a new support vector machine (SVM)-based approach that integrates a weighted combination of local binary pattern (LBP)- and principal component analysis (PCA)-based approaches. Furthermore, we construct the smile detector considering the evolution of the emotion along its natural life cycle. As a consequence, we achieve both low computation cost and high performance on video sequences.
Some regions (or blocks) of face images, and their affiliated features, are normally more important for face recognition. However, the variation in feature contributions, which exert different degrees of saliency on recognition, is usually ignored. This paper proposes a new sparse facial feature description model based on salience evaluation of regions and features, which not only considers the contributions of different face regions but also distinguishes those of different features within the same region. Specifically, a structured sparse learning scheme is employed as the salience evaluation method to encourage sparsity at both the group and individual levels, balancing regions and features. The new facial feature description model is then obtained by combining this salience evaluation method with region-based features. Experimental results show that the proposed model achieves better performance with much lower feature dimensionality.
Race identification is an innate ability of the human visual system, and machine-based race classification from face images can be used in a number of practical applications. Many race classification methods have been introduced, employing holistic face analysis, local feature extraction, and 3D models. In this paper, we propose a novel fused feature based on periocular region features for distinguishing East Asian from Caucasian faces. Using the periocular region landmarks, we extract five local texture or geometric features from regions of interest that carry discriminative race information. These features are then fused into a single strong feature by AdaBoost training. On the composite OFD-FERET face database, our method achieves near-perfect average accuracy. We also conduct extensive additional experiments to examine how gender, landmark detection, glasses, and image size affect performance.
In this paper, a novel facial-patch based recognition framework is proposed to address face recognition (FR) under severe illumination conditions. First, a novel lighting equilibrium distribution map (LEDM) for illumination normalization is proposed. In LEDM, an image is analyzed in the logarithm domain with the wavelet transform, and the approximation coefficients of the image are mapped according to a reference illumination map in order to normalize the distribution of illumination energy under different lighting effects; meanwhile, the detail coefficients are enhanced to emphasize detail information. The LEDM is obtained by blurring the distances between the test image and the reference illumination map in the logarithm domain, and it expresses the overall distribution of illumination variations. Then, a facial-patch based framework and a credit-degree based facial patch synthesizing algorithm are proposed. Each normalized face image is divided into several stacked patches; all patches are classified individually, and each patch from the test image casts a vote toward the classification of its parent image. A novel credit degree map, built from the LEDM, determines a credit degree for each facial patch; the main idea of its construction is that over- and under-illuminated regions should be assigned lower credit degrees than well-illuminated regions. Finally, results are obtained by credit-degree based facial patch synthesizing. The proposed method provides state-of-the-art performance on three data sets widely used for testing FR under different illumination conditions: Extended Yale-B, CAS-PEAL-R1, and CMU-PIE. Experimental results show that our FR framework outperforms several existing illumination compensation methods.
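The final voting stage, where each patch's vote is weighted by its credit degree, can be illustrated with a short sketch (a hypothetical minimal version; in the paper the credit degrees come from the LEDM-based credit degree map):

```python
import numpy as np

def credit_weighted_vote(patch_preds, credits, n_classes):
    """Combine per-patch predictions into one identity decision.

    patch_preds: predicted class index for each facial patch.
    credits: credit degree of each patch (lower for over- or
             under-illuminated regions).
    """
    scores = np.zeros(n_classes)
    for pred, credit in zip(patch_preds, credits):
        scores[pred] += credit
    return int(scores.argmax())
```

With this weighting, a single well-illuminated patch can outvote several badly illuminated ones.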
Computing performance is one of the key problems in embedded systems for high-resolution face detection applications. To improve the computing performance of embedded high-resolution face detection systems, a novel parallel implementation of an embedded face detection system was established based on a low-power CPU–accelerator heterogeneous many-core architecture. First, a basic CPU version of the face detection prototype was implemented based on a cascade classifier and the Local Binary Patterns operator. Second, the prototype was ported to a specialized embedded parallel computing platform called Parallella, which consists of a Xilinx Zynq and an Adapteva Epiphany. Third, the face detection algorithm was optimized for the Parallella architecture to improve the detection speed and the utilization of computing resources. Finally, a face detection experiment was conducted to evaluate the computing performance of the proposed implementation. The experimental results show that it achieved accuracy consistent with that of the dual-core ARM while delivering a 7.8× speedup over it, demonstrating significant advantages in computing performance.
Facial expression recognition is a crucial task in pattern recognition, and it becomes even more crucial when cross-cultural emotions are encountered. Various studies in the past have shown that not all facial expressions are innate and universal; many are learned and culture-dependent. Existing facial expression recognition methods typically train on one dataset and test on data from the same source, demonstrating high recognition accuracy, but their performance degrades drastically when expression images are taken from different cultures. Moreover, many facial expression patterns cannot be generated and used as training data in a single training session. A facial expression recognition system can maintain high accuracy and robustness globally, and over a longer period, if it possesses the ability to learn incrementally. We therefore propose a facial expression recognition system that can learn incrementally, using Local Binary Pattern (LBP) features to represent the expression space. We also propose a novel classification algorithm for multinomial classification problems; it is an efficient classifier and a good choice as the base classifier in real-time applications. The performance of the system is tested on static images from six different databases containing expressions from various cultures. The experiments using incremental learning classification demonstrate promising results.
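Incremental learning of the kind described above can be illustrated with a generic online softmax classifier that updates one mini-batch at a time (a sketch for illustration only, not the paper's proposed classifier):

```python
import numpy as np

class IncrementalSoftmax:
    """Online multinomial classifier: one gradient step per batch,
    so new expression samples (e.g. from a new culture) can be
    folded in without retraining from scratch."""

    def __init__(self, n_features, n_classes, lr=0.1):
        self.W = np.zeros((n_features, n_classes))
        self.lr = lr

    def partial_fit(self, X, y):
        """Single SGD step on a mini-batch (X: features, y: labels)."""
        z = X @ self.W
        z -= z.max(axis=1, keepdims=True)      # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        Y = np.eye(self.W.shape[1])[y]          # one-hot targets
        self.W -= self.lr * X.T @ (p - Y) / len(X)

    def predict(self, X):
        return (X @ self.W).argmax(axis=1)
```

Calling `partial_fit` repeatedly on newly arriving batches gives the incremental behaviour; nothing about previous batches has to be stored.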
This paper aims to estimate the prevalence of MRI changes in LBP out-patients and to determine the relationship between MRI abnormalities and personal and occupational factors. MRI records were obtained from 200 out-patients with LBP (114 males and 86 females) who received a diagnostic MRI at St. Luke's Medical Center. The mean and standard deviation of the sample's age were 43.8 and 14.8 years, respectively. Based on the MRI, each lumbar disc was scored as normal or degenerated; bulging and herniated discs were also recorded. Each patient completed a short questionnaire covering height, weight, age, present occupation, and any history of "heavy manual labor". Occupations were grouped into white collar sedentary; white collar professional; blue collar exposed to prolonged sitting and vibration; blue collar exposed to heavy manual labor; unemployed or retired; and homemaker. Chi-square tests were used to determine the statistical significance of the observed trends. Multiple logistic regression was used to develop a predictive model of spine pathology based on each subject's individual characteristics and occupational classification.
Normal discs were found in 26% of the patients and degenerated discs in 47.5%; bulging/herniated discs were found in 26.5%. Among men younger than 29 years, 50% had herniated discs and 50% were normal, while three-fourths of the women in the same age group showed normal discs. Forty-three percent of the subjects reported a history of performing heavy labor. In the logistic regression model, two variables were predictive of observable MRI pathology: age and a prior history of heavy labor. The analysis indicated that an older individual with a history of heavy labor was more likely to show one or more pathological discs in an MRI scan.
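The shape of such a logistic regression analysis can be sketched on invented data (the synthetic sample and coefficients below are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sample: age in years and a 0/1 heavy-labor history.
n = 400
age = rng.uniform(20, 70, n)
labor = rng.integers(0, 2, n).astype(float)
# Assumed generating model: pathology risk rises with both factors.
true_logit = -6.0 + 0.08 * age + 1.2 * labor
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by Newton's method (IRLS).
X = np.column_stack([np.ones(n), (age - age.mean()) / 10.0, labor])
w = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    H = X.T @ (X * (p * (1 - p))[:, None])     # Hessian
    w += np.linalg.solve(H, X.T @ (y - p))     # Newton step
# w[1] (age, per decade) and w[2] (heavy labor) should both come
# out positive, mirroring the two predictors reported above.
```

Exponentiating the fitted coefficients gives the odds ratios per decade of age and for a heavy-labor history.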
We investigate facial expression recognition (FER) based on image appearance, using state-of-the-art classification approaches. Different approaches to preprocessing face images are investigated. First, region-of-interest (ROI) images are obtained by extracting the facial ROI from raw images. FER on ROI images is used as the benchmark and compared with FER on difference images, which are obtained by computing the difference between the ROI images of neutral and peak facial expressions. FER is also evaluated on images obtained by applying the Local Binary Pattern (LBP) operator to ROI images. Further, we investigate two contrast enhancement operators for preprocessing: the histogram equalization (HE) approach and a brightness-preserving histogram equalization approach. The classification experiments are performed with a convolutional neural network (CNN) and a pre-trained deep learning model, on three public face databases: Cohn–Kanade (CK+), JAFFE, and FACES.
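For reference, plain global histogram equalization, the first of the two contrast-enhancement operators mentioned, can be sketched as follows (the brightness-preserving variant additionally splits the histogram, e.g. at the mean, and equalizes each part separately):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalisation of an 8-bit image: grey
    levels are remapped through the normalised cumulative
    histogram, spreading the intensities over the full range."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(255.0 * cdf).astype(np.uint8)  # look-up table
    return lut[img]
```

Applied to a low-contrast face image, this stretches the used grey levels across the whole 0–255 range before the LBP or CNN stage.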
Many documents use fingerprint impressions for authentication; property-related documents, bank checks, and application forms are examples. A fingerprint-based document image retrieval system aims to provide a solution for searching and browsing such digitized documents. The major challenges in implementing fingerprint-based document image retrieval are an efficient method for fingerprint detection and an effective feature extraction method. In this work, we propose a method for automatic detection of a fingerprint in a given query document image employing Discrete Wavelet Transform (DWT)-based features and an SVM classifier. We also propose and investigate two feature extraction schemes for fingerprint-based document image retrieval: DWT-based and Stationary Wavelet Transform (SWT)-based Local Binary Pattern (LBP) features. The standardized Euclidean distance is employed for matching and ranking the documents. The proposed method is tested on a database of 1200 document images and compared with the current state of the art, providing 98.87% detection accuracy and 73.08% Mean Average Precision (MAP) for document image retrieval.
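The matching step, ranking gallery documents by standardized Euclidean distance to the query feature vector, can be sketched as follows (illustrative only; the wavelet/LBP feature extraction itself is omitted):

```python
import numpy as np

def rank_by_standardized_euclidean(query, gallery):
    """Rank gallery feature vectors by standardized Euclidean
    distance to the query: each dimension is scaled by its
    variance across the gallery before the distance is taken."""
    gallery = np.asarray(gallery, dtype=float)
    query = np.asarray(query, dtype=float)
    var = gallery.var(axis=0)
    var[var == 0] = 1.0          # guard against constant dimensions
    dist = np.sqrt((((gallery - query) ** 2) / var).sum(axis=1))
    return np.argsort(dist), dist
```

Scaling by per-dimension variance stops high-variance feature dimensions from dominating the ranking.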
Computer-assisted colon cancer detection in histopathological images is a demanding task due to the shape characteristics and other biological properties of the tissue. Images acquired through the histopathological microscope may vary in magnification for better visibility, which changes the morphological properties; an automated, magnification-independent colon cancer detection system is therefore essential. Manual diagnosis of colon biopsy images is subjective, slow, and laborious, and leads to disagreement among histopathologists evaluating images at various microscopic magnifications. Automatic detection of colon cancer across image magnifications is challenging in many respects, such as tailored segmentation and varying features, and demands techniques that exploit the textural, color, and geometric properties of colon tissue. This work presents a segmentation approach based on morphological features derived from the segmented region. Gabor Wavelet, Harris Corner, and DWT-LBP coefficients are extracted, as these features are largely independent of magnification in the spatial domain. They are fed to a Genetically Optimized Neural Network classifier to classify tissue as normal or malignant; the genetic algorithm is used to learn the best hyper-parameters for the neural network.
Today, manipulating, storing, and sending digital images is simple and easy because of the development of digital imaging devices, in both hardware and software. Digital images are used in many contexts of people's lives, such as news and forensics, so the reliability of received images is a question that often occupies the viewer's mind, and the authenticity of digital images is increasingly important. Detecting a forged image as genuine, or a genuine image as forged, can have irreparable consequences; for example, an image from a crime scene can lead to a wrong decision if it is assessed incorrectly. In this paper, we propose a combined method to improve the accuracy of copy–move forgery detection (CMFD) while reducing the false positive rate (FPR), based on texture attributes. The proposed method uses a combination of the scale-invariant feature transform (SIFT) and the local binary pattern (LBP); considering texture features around the keypoints detected by the SIFT algorithm helps to reduce incorrect matches and improve the accuracy of CMFD. In addition, several pre-processing steps are proposed to find more and better keypoints. The approach was evaluated on the COVERAGE, GRIP, and MICC-F220 databases. Experimental results show that the proposed method, without clustering or segmentation and with only simple matching operations, achieves true positive rates of 98.75%, 95.45%, and 87% on the GRIP, MICC-F220, and COVERAGE datasets, respectively. On the GRIP dataset it also reduces the FPR from 17.75% to 3.75%, achieving the best results.
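The core idea, keeping a SIFT correspondence only when the local textures around the two keypoints also agree, can be illustrated with a histogram comparison (the chi-square statistic and the threshold below are assumptions, not the paper's exact matching rule):

```python
import numpy as np

def texture_consistent(hist_a, hist_b, tau=0.25):
    """Return True when two normalised local texture histograms
    (e.g. LBP histograms of patches around matched keypoints)
    are close in chi-square distance; inconsistent matches are
    discarded as likely false positives."""
    hist_a = np.asarray(hist_a, dtype=float)
    hist_b = np.asarray(hist_b, dtype=float)
    chi2 = ((hist_a - hist_b) ** 2 / (hist_a + hist_b + 1e-12)).sum()
    return bool(chi2 < tau)
```

In a CMFD pipeline this filter would run on each tentative SIFT match before the final decision, which is how texture information lowers the FPR.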
Breast cancer is a leading cause of death in women; early detection and treatment can significantly reduce breast cancer mortality. Texture features are widely used in classification problems, mainly for diagnostic purposes where the region of interest is delineated manually, but they have not yet been applied to sonoelastographic segmentation. This paper proposes a method for segmenting sonoelastographic breast images using an optimal subset of 32 features extracted by three different methods: Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and edge-based features. The image is preprocessed with a sticks filter, which improves contrast, enhances edges, and emphasizes the tumor boundary. The extracted features are ranked by Sequential Forward Floating Selection (SFFS), and the optimal number of ranked features is used for segmentation with k-means clustering. The segmented images then undergo morphological processing that marks the tumor boundary. The overall accuracy is studied to assess the effect of automated segmentation: the subset of the first 10 ranked features provides an accuracy of 79%, and the combined metric of overlap, over-segmentation, and under-segmentation is 90%. The proposed approach can also be considered for diagnostic purposes alongside sonographic breast images.
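The clustering step can be sketched with plain Lloyd's k-means over the ranked feature vectors (a minimal stand-in with a farthest-point initialisation; real pipelines typically use a library implementation):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: assign each feature vector to its
    nearest centre, recompute the centres, and repeat."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # Farthest-point initialisation keeps the centres spread out.
    centers = np.empty((k, X.shape[1]))
    centers[0] = X[rng.integers(len(X))]
    for j in range(1, k):
        d = ((X[:, None, :] - centers[None, :j]) ** 2).sum(-1).min(1)
        centers[j] = X[d.argmax()]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

With pixel-wise texture features as rows of `X`, the cluster labels form the raw segmentation mask that the morphological post-processing then cleans up.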
Facial expression recognition is an interesting research direction in pattern recognition and computer vision, increasingly used in artificial intelligence, human–computer interaction, and security monitoring. In recent years, the Convolutional Neural Network (CNN), as a deep learning technique, together with multiple-classifier combination methods, has been applied to classify facial expressions accurately. In this paper, we propose a multimodal classification approach to facial expression recognition based on local texture descriptor representations and a combination of CNNs. First, to reduce the influence of redundant information, a preprocessing stage performs face detection, face image cropping, and computation of the texture descriptors Local Binary Pattern (LBP), Local Gradient Code (LGC), Local Directional Pattern (LDP), and Gradient Direction Pattern (GDP). Second, we construct a cascade CNN architecture over the multimodal data of each descriptor (CNNLBP, CNNLGC, CNNGDP, and CNNLDP) to extract facial features and classify emotions. Finally, we apply aggregation techniques (the sum and product rules) to combine the four multimodal outputs and obtain the final decision of our system. Experimental results on the CK+ and JAFFE databases show that the proposed multimodal classification system achieves superior recognition performance compared to existing studies, with classification accuracies of 97.93% and 94.45%, respectively.
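The aggregation stage, combining the four per-descriptor CNN posteriors by the sum or product rule, reduces to a few lines (a sketch; the CNNs themselves are omitted):

```python
import numpy as np

def fuse_posteriors(prob_list, rule="product"):
    """Combine per-modality class posteriors (e.g. from CNNLBP,
    CNNLGC, CNNGDP, CNNLDP) with the sum or product rule and
    return the winning class index plus the fused scores."""
    P = np.asarray(prob_list, dtype=float)
    scores = P.prod(axis=0) if rule == "product" else P.sum(axis=0)
    return int(scores.argmax()), scores
```

The product rule rewards classes on which all modalities agree, while the sum rule is more tolerant of a single dissenting modality.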
Detection of retinal abnormalities is an interesting problem that has attracted the attention of several studies. Although many works have been published on this topic, they often remain ineffective when applied to real data from medical facilities because of practical conditions and constraints. To address this problem, this paper proposes an effective model to detect signs of retinal diseases, including glaucoma, cataracts, and diabetic retinopathy. The proposed model, EfficientNet-TL, combines transfer learning techniques with the EfficientNet network architecture to improve accuracy. In addition, EfficientNet-TL applies the histogram of oriented gradients (HOG) technique to extract information about the structure and shape of abnormal regions in fundus images, and the LBP technique to extract image texture features and detect small changes on the retinal surface. Experimental results on the image dataset showed that EfficientNet-TL achieves high efficiency, with an accuracy of up to 93%, outperforming previous models.
Lumbar disc diseases are the most common cause of lower back pain (LBP). In this paper, a new method for automatic diagnosis of lumbar disc herniation is proposed, based on clinical Magnetic Resonance Imaging (MRI) data; we use T2-weighted sagittal and myelograph images. Our method uses Otsu's thresholding method to extract the spinal cord from MR images of the lumbar discs. In the next step, a third-order polynomial is fitted to the extracted spinal cord, and at the end of the preprocessing step all T2-weighted sagittal images are prepared for disc boundary extraction and labeling. After labeling and extracting a region of interest (ROI) for each disc, intensity and shape features are used for classification. The presented method was applied to 30 clinical cases, each containing 7 discs (210 lumbar discs in total), for herniation diagnosis. The results revealed 92.38% and 93.80% accuracy for the Artificial Neural Network and Support Vector Machine (SVM) classifiers, respectively, indicating the superiority of the proposed method over those reported in similar studies.
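The Otsu thresholding step used for spinal cord extraction picks the grey level that maximises the between-class variance of the intensity histogram; a minimal sketch:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: return the threshold t
    that maximises the between-class variance of the two classes
    {levels <= t} and {levels > t}."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(),
                       minlength=256)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 partial mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0       # undefined at the ends
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form the bright spinal-cord mask that the polynomial is then fitted to.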
Two significant problems in content-based retrieval methods are (1) accuracy: most current content-based image retrieval methods have been neither quantitatively compared nor benchmarked with respect to accuracy, and (2) efficiency: image database search methods must be analyzed for their computational efficiency and interrelationships. We assert that the accuracy problem is due to the generality of the applications involved: in current systems, the goal of the user is not clear, which makes it difficult to create ground truth. In this paper, we quantitatively compare and evaluate four fundamentally different methods for image copy location, namely, optimal keys, texture, projection, and template methods, in a large portrait database. We discuss some important theoretical interrelationships, computational efficiency, and accuracy with respect to real-noise experiments.
Most human feelings are expressed through the face: by looking at a person's face, one can easily identify whether they are happy, sad, or angry. Thus, to truly understand the feeling behind words, facial expressions must be correctly recognized. This paper gives a brief overview of the facial expression recognition system, and discusses a few approaches with different strategies, the available datasets, and the commonly used classifiers. We have used histogram of oriented gradients (HOG) features for facial expression recognition, and the results are promising.
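A single HOG cell, the building block of the feature used above, is just an orientation histogram of gradient magnitudes; a minimal sketch (no block normalisation or cell tiling):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram of one HOG cell: gradient magnitudes
    are accumulated into unsigned-orientation bins covering
    0-180 degrees."""
    gy, gx = np.gradient(np.asarray(cell, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(),
                       minlength=n_bins)
```

Concatenating such histograms over a grid of cells (with block normalisation) yields the HOG descriptor fed to the expression classifier.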