
  • Article (No Access)

    WAVELET ANALYSIS OF LOW BACK SURFACE EMG SIGNALS SUBJECT TO UNEXPECTED LOAD

    This study reports a new technique for the analysis of electromyographic signals from the low back muscles. More specifically, the effect of unexpected load on a normal subject and a subject with chronic low back pain was determined and quantified using wavelet-based analysis (Morlet wavelet). The analysis was performed using a wavelet software system, subsequently referred to as PSCW. The system identified automatically, accurately, and in a uniquely reproducible manner the time response of the erector spinae muscle. The exact number of responses, as well as their corresponding times and amplitudes, were determined and tabulated. The initial reaction time of the normal subject was observed to be faster than that of the subject with chronic low back pain. This observation may help in understanding the physiology of the neuromuscular system associated with low back spine disorders. An occupational and clinical test based on this observation could be designed to give an accurate assessment of the status of a low back disorder. Based on this assessment, a rehabilitation program could be developed with the objective of improving the condition of a spine disorder (decreasing the initial response time) through muscle strengthening.
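
    As an illustration of this kind of wavelet-based response detection (a minimal sketch, not the authors' PSCW system; the sampling rate, scale range, and threshold rule are assumptions):

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_emg_responses(emg, fs=1000.0, threshold_factor=3.0):
    """Locate muscle response times/amplitudes in a surface EMG trace
    using a Morlet continuous wavelet transform (illustrative only)."""
    scales = np.arange(1, 64)                        # assumed scale range
    coeffs, _ = pywt.cwt(emg, scales, 'morl', sampling_period=1.0 / fs)
    energy = np.mean(np.abs(coeffs), axis=0)         # scale-averaged envelope

    baseline = np.median(energy)                     # robust resting level
    mad = np.median(np.abs(energy - baseline))
    peaks, props = find_peaks(energy,
                              height=baseline + threshold_factor * mad,
                              distance=int(0.05 * fs))  # >= 50 ms apart
    return peaks / fs, props['peak_heights']         # times (s), amplitudes

# Usage with synthetic data: a burst ~0.3 s after an unexpected load at t=0.
fs = 1000.0
rng = np.random.default_rng(0)
emg = 0.05 * rng.standard_normal(2000)               # 2 s of baseline noise
emg[300:450] += rng.standard_normal(150)             # simulated reflex burst
times, amps = detect_emg_responses(emg, fs)
print("response times (s):", np.round(times, 3))
```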

  • Article (No Access)

    NANOSCALE FINFET SENSOR FOR DETERMINING THE BREAST CANCER TISSUES USING WAVELET COEFFICIENTS

    A noninvasive optical method for determining the optical properties of normal and cancerous breast tissues, based on an interpolating wavelet approach and the characteristics of a nanoscale FinFET sensor, is theoretically developed and presented in this paper. This novel approach classifies normal and cancerous human breast tissues by calculating the surface potential variations of a nanoscale FinFET illuminated by a laser source at different wavelengths. From these surface potential variations, the optical properties of the tissues are determined. Using this method, point-to-point variations in tissue composition and structural differences between healthy and diseased tissues could be identified. The results obtained are used to examine the performance of the device for its suitability as a nanoscale sensor.
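
    The "interpolating wavelet" idea can be illustrated with the Deslauriers–Dubuc refinement rule that underlies such schemes; the sample values, boundary handling, and function below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def dd4_refine(samples):
    """One level of 4-point Deslauriers-Dubuc interpolating subdivision,
    the scheme underlying interpolating wavelets. Endpoints are handled
    by simple linear interpolation."""
    n = len(samples)
    out = np.empty(2 * n - 1)
    out[::2] = samples                               # keep original samples
    for i in range(n - 1):
        if 1 <= i <= n - 3:
            out[2 * i + 1] = (-samples[i - 1] + 9 * samples[i]
                              + 9 * samples[i + 1] - samples[i + 2]) / 16
        else:                                        # near the boundary
            out[2 * i + 1] = (samples[i] + samples[i + 1]) / 2
    return out

# Hypothetical surface-potential readings (V) of a FinFET at a few laser
# wavelengths; refine twice to approximate a dense response curve.
phi = np.array([0.42, 0.47, 0.55, 0.60, 0.58, 0.51])
dense = dd4_refine(dd4_refine(phi))
print(len(phi), "->", len(dense))                    # 6 -> 21
```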

  • Article (No Access)

    AUTOMATED GLAUCOMA DETECTION USING HYBRID FEATURE EXTRACTION IN RETINAL FUNDUS IMAGES

    Glaucoma is one of the most common causes of blindness. Robust mass screening may help to extend the symptom-free life of affected patients. Realizing mass screening requires a cost-effective glaucoma detection method that integrates well with digital medical and administrative processes. To address these requirements, we propose a novel low-cost automated glaucoma diagnosis system based on hybrid feature extraction from digital fundus images. The paper discusses a system for the automated identification of normal and glaucoma classes using higher order spectra (HOS), trace transform (TT), and discrete wavelet transform (DWT) features. The extracted features are fed to a support vector machine (SVM) classifier with linear, polynomial (orders 1, 2, and 3), and radial basis function (RBF) kernels in order to select the best kernel for automated decision making. In this work, the SVM classifier with a polynomial kernel of order 2 was able to identify glaucoma and normal images with an accuracy of 91.67%, and sensitivity and specificity of 90% and 93.33%, respectively. Furthermore, we propose a novel integrated index, the Glaucoma Risk Index (GRI), composed of HOS, TT, and DWT features, to diagnose an unknown sample using a single feature. We hope that the GRI will aid clinicians in making a faster glaucoma diagnosis during mass screening of normal/glaucoma images.
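
    A minimal sketch of the DWT-feature and polynomial-order-2 SVM portion of such a pipeline (the HOS and TT features are omitted; the wavelet, decomposition level, and the synthetic data are assumptions):

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def dwt_energy_features(image, wavelet='db1', level=2):
    """Average absolute energy of each 2D DWT subband of a fundus image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]             # approximation band
    for (cH, cV, cD) in coeffs[1:]:                  # detail bands per level
        feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
    return np.array(feats)

# Hypothetical data: grayscale fundus images, labels 0 = normal, 1 = glaucoma.
rng = np.random.default_rng(0)
images = rng.random((60, 128, 128))
labels = rng.integers(0, 2, 60)

X = np.vstack([dwt_energy_features(img) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=2))
clf.fit(X[:40], labels[:40])
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```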

  • Article (No Access)

    GRAPH WAVELET ALIGNMENT KERNELS FOR DRUG VIRTUAL SCREENING

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing local graph topology. We design a novel graph kernel function that uses these topology features to build predictive models for chemicals via a support vector machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel on a set of chemical structure–activity prediction benchmarks. Our results indicate that the kernel yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results show that the use of wavelet functions significantly decreases the computational cost of graph kernel computation, with more than a ten-fold speedup.
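
    A much-simplified stand-in for the kernel described above: per-atom features are averaged over growing neighborhoods to summarize local topology, and two graphs are compared by greedily aligning their vertex feature vectors. The feature choice, number of scales, and alignment rule are assumptions:

```python
import numpy as np

def wavelet_like_features(adj, node_feats, n_scales=3):
    """Stack of neighborhood-averaged node features at increasing hop scales,
    a crude analogue of a graph wavelet summary of local topology."""
    deg = adj.sum(axis=1, keepdims=True)
    walk = adj / np.maximum(deg, 1)                  # row-normalized adjacency
    feats, current = [node_feats], node_feats
    for _ in range(n_scales - 1):
        current = walk @ current                     # average over one more hop
        feats.append(current)
    return np.hstack(feats)                          # (n_nodes, d * n_scales)

def alignment_kernel(adj1, x1, adj2, x2, gamma=1.0):
    """Greedy vertex alignment: each node of graph 1 is matched to its most
    similar node of graph 2 under an RBF similarity; similarities are summed."""
    f1 = wavelet_like_features(adj1, x1)
    f2 = wavelet_like_features(adj2, x2)
    d2 = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-gamma * d2)
    return sim.max(axis=1).sum()

# Toy molecule: adjacency matrix plus one-hot atom-type features.
a1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
x1 = np.eye(3)
print("k(G1, G1) =", alignment_kernel(a1, x1, a1, x1))
```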

  • Article (Open Access)

    Targeted principal component analysis: A new motion artifact correction approach for near-infrared spectroscopy

    As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, so it is crucial to correct for them before estimating stimulus-evoked hemodynamic responses. There are various methods for correction, such as principal component analysis (PCA), wavelet-based filtering, and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which applies a PCA filter only on the segments of data identified as motion artifacts. This is expected to overcome the filtering of desired signals that plagues standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) than wavelet-based filtering and spline interpolation for the Velcro probe: it yields a significant reduction in mean-squared error (MSE) and a significant enhancement in Pearson's correlation coefficient with the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the motion-corrected Velcro probe in terms of MSE and Pearson's correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If that is not feasible, we suggest the use of tPCA in the processing of motion artifact-contaminated data.
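
    A minimal sketch of the tPCA idea (PCA filtering applied only on flagged segments); the moving-standard-deviation artifact detector and the number of removed components are assumptions for the example:

```python
import numpy as np

def detect_artifact_segments(signals, fs, win_s=0.5, k=3.0):
    """Flag samples whose moving standard deviation (over a short window,
    averaged across channels) exceeds k times its median -- a simple
    stand-in for a motion-artifact detector."""
    win = max(int(win_s * fs), 2)
    mov_std = np.array([signals[max(0, i - win):i + 1].std(axis=0).mean()
                        for i in range(signals.shape[0])])
    return mov_std > k * np.median(mov_std)

def tpca_correct(signals, mask, n_remove=1):
    """Apply a PCA filter only on the samples flagged as motion artifacts:
    remove the top principal components there, leave clean data untouched."""
    corrected = signals.copy()
    mean = signals[mask].mean(axis=0)
    seg = signals[mask] - mean
    u, s, vt = np.linalg.svd(seg, full_matrices=False)  # channels = columns
    s[:n_remove] = 0.0                               # drop dominant components
    corrected[mask] = u @ np.diag(s) @ vt + mean
    return corrected

# Usage: multichannel NIRS-like data with a shared spike artifact.
fs = 10.0
data = 0.1 * np.random.randn(600, 8)                 # 60 s, 8 channels
data[200:220] += 5.0                                 # simulated motion artifact
mask = detect_artifact_segments(data, fs)
clean = tpca_correct(data, mask, n_remove=1)
```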

  • Article (Open Access)

    EXTRACTION OF CORONARY ARTERIAL TREE USING CINE X-RAY ANGIOGRAMS

    An efficient and robust method for the identification of coronary arteries and the evaluation of stenosis severity on routine X-ray angiograms is proposed. Accurately identifying coronary arteries is challenging owing to poor signal-to-noise ratio, vessel overlap, and superimposition with various anatomical structures such as ribs, the spine, or heart chambers. The proposed method consists of two major stages: (a) signal-based image segmentation and (b) vessel feature extraction. 3D Fourier and 3D wavelet transforms are first employed to reduce the background and noisy structures in the images. Afterwards, a set of matched filters is applied to enhance the coronary arteries. Finally, clustering analysis, a histogram technique, and size filtering are utilized to obtain a binary image containing the final segmented coronary arterial tree. To extract vessel features in terms of vessel centerline and diameter, a gradient vector flow-based snake algorithm is applied to determine the medial axis of a vessel, followed by the calculation of the vessel boundaries and width associated with the detected medial axis.
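
    A sketch of the matched-filtering stage alone: the image is convolved with Gaussian line-profile kernels at several orientations and the maximum response is kept. Kernel size, vessel width, and orientation count are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate, convolve

def gaussian_matched_kernel(sigma=2.0, length=9, size=15):
    """2D kernel with a Gaussian cross-section along x, constant along a
    short segment in y: the classic matched-filter profile for a vessel
    that appears darker than its background."""
    x = np.arange(size) - size // 2
    profile = -np.exp(-x**2 / (2 * sigma**2))        # dark vessel, negative dip
    kernel = np.zeros((size, size))
    half = length // 2
    kernel[size // 2 - half:size // 2 + half + 1, :] = profile
    kernel -= kernel.mean()                          # zero-mean response
    return kernel

def enhance_vessels(image, n_angles=12):
    """Maximum matched-filter response over a bank of rotated kernels."""
    base = gaussian_matched_kernel()
    responses = []
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        k = rotate(base, angle, reshape=False)
        responses.append(convolve(image.astype(float), k))
    return np.max(responses, axis=0)

# Usage: enhance a synthetic angiogram with a dark vertical "vessel".
img = np.ones((64, 64))
img[:, 30:33] = 0.2
enhanced = enhance_vessels(img)
```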

  • Article (No Access)

    RECOGNITION OF SLEEP STAGES BASED ON A COMBINED NEURAL NETWORK AND FUZZY SYSTEM USING WAVELET TRANSFORM FEATURES

    Recognition of sleep stages is an important task in the assessment of sleep quality. Several biomedical signals, such as EEG, ECG, EMG, and EOG, are used extensively to classify sleep stages, which is very important for the diagnosis of sleep disorders. Many sleep studies have focused on the automatic classification of sleep stages. In this research, a new classification method is presented that uses an Elman neural network combined with fuzzy rules to extract sleep features based on wavelet decompositions. The nine subjects who participated in this study were recruited from Cheng-Ching General Hospital in Taichung, Taiwan. The sampling frequency was 250 Hz, and a single-channel (C3-A1) EEG signal was acquired for each subject. The combined neural network and fuzzy system was used to recognize sleep stages based on epochs (10-second segments of data). The classification results relied on the strengths of the combined neural network and fuzzy system, which achieved an average specificity of approximately 96% and an average accuracy of approximately 94%.
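
    A minimal sketch of the wavelet feature-extraction step for 10-second epochs sampled at 250 Hz (the wavelet family and decomposition depth are assumptions; the Elman network and fuzzy rules are not reproduced here):

```python
import numpy as np
import pywt

FS = 250                # sampling rate from the study
EPOCH_S = 10            # epoch length in seconds

def epoch_wavelet_features(epoch, wavelet='db4', level=5):
    """Relative energy of each wavelet subband of one EEG epoch. With
    fs = 250 Hz and 5 levels, the detail bands roughly cover the
    classical EEG rhythms (delta through beta)."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Usage: split a continuous C3-A1 recording into epochs and featurize.
eeg = np.random.randn(60 * FS)                       # 1 min of fake EEG
eeg = eeg[: (len(eeg) // (EPOCH_S * FS)) * EPOCH_S * FS]
epochs = eeg.reshape(-1, EPOCH_S * FS)
features = np.vstack([epoch_wavelet_features(e) for e in epochs])
print(features.shape)                                # (n_epochs, level + 1)
```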

  • Article (No Access)

    AN EFFICIENT RIPPLET-BASED SHRINKAGE TECHNIQUE FOR MR IMAGE RESTORATION

    In this paper, a new ripplet-based shrinkage technique is used to suppress noise in magnetic resonance imaging (MRI). The propitious properties of the ripplet transform, such as anisotropy, high directionality, good localization, and high energy compaction, make the proposed method efficient and feature-preserving compared to other transforms. The ripplet transform provides an efficient representation of edges in images, with high potential for image processing applications such as image restoration, compression, and de-noising. The proposed method employs a new nonlinear ripplet-based shrinkage technique to extract the spatial and frequency information from MRI corrupted by noise. This new shrinkage technique was chosen for its simplicity, versatility, and efficiency in removing noise from both homogeneous regions and regions with singularities, compared to existing filtering techniques. Experiments were conducted on several diffusion-weighted and anatomical images. The results show that the proposed de-noising technique is competitive with current state-of-the-art methods. Qualitative validation was performed based on several quality metrics, and a profound improvement over existing methods was obtained. Higher values of peak signal-to-noise ratio (PSNR), correlation coefficient (CC), and mean structural similarity index (MSSIM), and lower values of root mean square error (RMSE) and computational time, were obtained for the proposed ripplet-based shrinkage technique compared to existing ones.
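
    No widely available Python package implements the ripplet transform, so the sketch below demonstrates the same transform-shrink-reconstruct pattern with an ordinary wavelet transform standing in for it; the soft-thresholding rule and noise estimate follow common practice, not necessarily the paper's:

```python
import numpy as np
import pywt

def transform_shrink_denoise(image, wavelet='db2', level=3):
    """Transform-domain soft shrinkage: forward transform, threshold the
    detail coefficients, inverse transform. The paper does this in the
    ripplet domain; here the wavelet domain stands in for it."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate noise sigma from the finest diagonal band (robust MAD rule).
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(image.size))    # universal threshold
    shrunk = [coeffs[0]]
    for band in coeffs[1:]:
        shrunk.append(tuple(pywt.threshold(c, thr, mode='soft') for c in band))
    return pywt.waverec2(shrunk, wavelet)

# Usage on a noisy synthetic MR-like image.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * np.random.randn(64, 64)
denoised = transform_shrink_denoise(noisy)
```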

  • Article (No Access)

    DIAGNOSIS OF CARDIAC ABNORMALITY USING HEART SOUND

    Heart sound (HS) analysis, or auscultation, is one of the simplest, non-invasive, and least expensive methods used to evaluate heart health, and is a basic, routine part of a physician's examination of a patient. Detecting cardiac abnormality by auscultation demands a physician's experience, and even then the scope for error is high. In this paper, a low-cost electronic stethoscope is built to acquire HSs in a novel manner, taking one recording each from the ventricular and auricular areas and superimposing them to obtain a resultant signal containing both distinct lub-dub sounds. Then, a light, fast, and computationally inexpensive beat-track method followed by wavelet reconstruction is presented for the correct detection of S1 and S2. It works without an ECG reference and can be used satisfactorily on both normal and pathological HSs. Moreover, heartbeats can be identified in both de-noised and noisy environments, as the method is independent of external disturbances. Significant features are extracted from the resultant HSs with detected S1 and S2 and fed to a feed-forward back-propagation network, which classifies the HS as normal or pathological. This algorithm has been applied to 24 pairs of HSs from 24 subjects (15 pathological and nine normal), and the classification yields an accuracy of 91.7% with a sensitivity of 81.8%. The overall performance suggests a good performance-to-cost ratio. This system can be used as a first diagnosis tool by medical professionals.
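
    A minimal sketch of the detection idea (wavelet reconstruction followed by envelope peak picking); the subband choice, envelope smoothing, and minimum peak spacing are assumptions, not the paper's exact beat-track method:

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_heart_sounds(hs, fs=2000):
    """Reconstruct the heart-sound band from one mid-level wavelet detail,
    then pick envelope peaks as candidate S1/S2 locations."""
    coeffs = pywt.wavedec(hs, 'db6', level=6)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[2] = coeffs[2]                              # keep one detail band only
    band = pywt.waverec(kept, 'db6')[: len(hs)]

    win = int(0.02 * fs)                             # 20 ms moving average
    envelope = np.convolve(np.abs(band), np.ones(win) / win, mode='same')
    peaks, _ = find_peaks(envelope,
                          height=2 * np.median(envelope),
                          distance=int(0.2 * fs))    # sounds >= 200 ms apart
    return peaks / fs                                # candidate S1/S2 times (s)

# Usage with a synthetic two-beat signal.
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
hs = 0.02 * np.random.randn(t.size)
for onset in (0.1, 0.45, 0.9, 1.25):                 # S1, S2, S1, S2
    i = int(onset * fs)
    hs[i:i + 100] += np.sin(2 * np.pi * 60 * t[:100])
print(detect_heart_sounds(hs, fs))
```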

  • Article (No Access)

    IMPROVEMENT OF THE PERFORMANCE OF FINGERPRINT VERIFICATION USING A COMBINATORIAL APPROACH

    Fingerprint verification systems have attracted much attention in secure organizations; however, conventional methods still suffer from unconvincing recognition rates on noisy fingerprint images. To design a robust verification system, in this paper, wavelet and contourlet transforms (CTs) are suggested as efficient feature extraction techniques to elicit a comprehensive set of descriptive features characterizing fingerprint images. Contourlet coefficients capture the smooth contours of fingerprints, while wavelet coefficients reveal their rough details. Due to the high dimensionality of the elicited features, across group variance (AGV), greedy overall relevancy (GOR), and Davies–Bouldin fast feature reduction (DB-FFR) methods were adopted to remove redundant features. These features were applied to three different classifiers: Boosting Direct Linear Discriminant Analysis (BDLDA), Support Vector Machine (SVM), and Modified Nearest Neighbor (MNN). The proposed method, along with state-of-the-art methods, was evaluated over the FVC2004 dataset in terms of genuine acceptance rate (GAR), false acceptance rate (FAR), and equal error rate (EER). The features selected by AGV were the most significant and provided a GAR of 95.12%. Applying the features selected by the GOR method to the modified nearest neighbor classifier resulted in an average EER below 1%, which outperformed the compared methods. The comparative results imply the statistical superiority (p<0.05) of the proposed approach over its counterparts.
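
    A minimal sketch of the select-then-classify pattern: the ratio-of-variances ranking below is a generic Fisher-style criterion standing in for AGV, and a plain nearest-neighbor classifier stands in for the modified version:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def variance_ratio_scores(X, y):
    """Score each feature by between-class variance over within-class
    variance (a generic across-group-variance style criterion)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

# Hypothetical high-dimensional wavelet/contourlet feature matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 500))
y = rng.integers(0, 2, 100)
X[y == 1, :10] += 1.5                                # 10 informative features

top = np.argsort(variance_ratio_scores(X, y))[::-1][:10]
clf = KNeighborsClassifier(n_neighbors=1).fit(X[:80][:, top], y[:80])
print("accuracy on held-out prints:", clf.score(X[80:][:, top], y[80:]))
```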

  • Article (No Access)

    MULTI-CHANNEL ECG-BASED STEGANOGRAPHY

    There is a growing tendency to conceal secure information in electrocardiogram (ECG) signals in such a way that the embedded ECGs remain diagnosable. The average length of an ECG recording for a primary diagnosis is no longer than 1 min, which limits its concealment capacity. To overcome this drawback, we enhanced both concealment capacity and embedding quality by: (I) using 12-lead ECGs to provide more embedding space, (II) shuffling the input message bits via a nonlinear feedback shift register (NLFSR), and (III) inserting the selected bits of each channel into the high-frequency wavelet coefficients of non-QRS segments. Inserting the message bits into the high-frequency coefficients of less important ECG parts helps preserve the quality of the watermarked ECGs. To assess the proposed method, a text containing different letters (its size varying with the size of the non-QRS segments and the high-frequency sub-band) was hidden in the 12-lead ECG signals of 56 randomly selected subjects from the PTB database, where each signal is 10 s long. The performance was compared to state-of-the-art ECG-based steganography schemes in terms of the following criteria: percentage residual difference (PRD), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and bit error rate (BER). Our results showed that the proposed scheme offers fast computation and secure embedding while providing a high data-hiding capacity.
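
    A minimal sketch of the embedding step only (the parity-quantization rule, the wavelet, and the toy QRS mask are assumptions; the NLFSR shuffling and multi-lead bookkeeping are omitted):

```python
import numpy as np
import pywt

MODE = 'periodization'   # makes the DWT an orthogonal, exactly invertible map

def embed_bits(ecg, bits, qrs_mask, q=0.01, wavelet='db4'):
    """Hide bits in the finest-detail wavelet coefficients of non-QRS samples
    by quantizing each chosen coefficient to an even (bit 0) or odd (bit 1)
    multiple of the step q."""
    approx, detail = pywt.dwt(ecg, wavelet, mode=MODE)
    ok = ~qrs_mask[::2][: len(detail)]               # coeff i ~ samples 2i..2i+1
    idx = np.flatnonzero(ok)[: len(bits)]
    for i, bit in zip(idx, bits):
        k = int(np.round(detail[i] / q))
        if k % 2 != bit:                             # force parity to encode bit
            k += 1
        detail[i] = k * q
    return pywt.idwt(approx, detail, wavelet, mode=MODE), idx

def extract_bits(stego, idx, q=0.01, wavelet='db4'):
    """Recover hidden bits from the parity of the quantized coefficients."""
    _, detail = pywt.dwt(stego, wavelet, mode=MODE)
    return [int(np.round(detail[i] / q)) % 2 for i in idx]

# Usage on a fake lead with a crude periodic "QRS" mask.
ecg = 0.5 * np.sin(np.linspace(0, 20 * np.pi, 5000))
qrs_mask = np.zeros(ecg.size, dtype=bool)
for start in range(0, 5000, 500):                    # pretend one QRS per beat
    qrs_mask[start:start + 60] = True
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego, idx = embed_bits(ecg, bits, qrs_mask)
print(extract_bits(stego, idx) == bits)
```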

  • Chapter (No Access)

    EDGE PRESERVED DENOISING IN MAGNETIC RESONANCE IMAGES AND THEIR APPLICATIONS

    Edge-preserving image enhancement and noise removal are of great interest in medical imaging. This chapter describes schemes for noise suppression in magnetic resonance images using wavelet multiscale thresholding. To fully exploit the wavelet interscale dependencies, we multiply the adjacent wavelet subbands of a Canny-edge-detector-like dyadic wavelet to form a multiscale product function, in which the significant image features that evolve with high magnitude across wavelet scales are amplified while noise is suppressed, facilitating an easy differentiation of edge structures from noise. Thereafter, an adaptive threshold is calculated and imposed on the products, instead of directly on the wavelet coefficients, to identify important features. Experiments show that the proposed scheme outperforms other wavelet-thresholding denoising methods in suppressing noise and preserving edges.
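
    A minimal 1D sketch of the multiscale-product idea, with the stationary wavelet transform standing in for the dyadic wavelet and a simple median-based threshold as an assumed rule:

```python
import numpy as np
import pywt

def multiscale_product_denoise(signal, wavelet='db2', level=3, k=2.0):
    """Multiply detail coefficients of adjacent scales; keep a coefficient
    only where the product (edge evidence that persists across scales)
    exceeds an adaptive threshold."""
    coeffs = pywt.swt(signal, wavelet, level=level)  # undecimated: bands align
    details = [d for (_, d) in coeffs]
    denoised = []
    for j, (approx, d) in enumerate(coeffs):
        neighbor = details[j + 1] if j + 1 < level else details[j - 1]
        product = d * neighbor                       # interscale product
        thr = k * np.median(np.abs(product))         # simple adaptive threshold
        denoised.append((approx, np.where(np.abs(product) > thr, d, 0.0)))
    return pywt.iswt(denoised, wavelet)

# Usage: a noisy step edge; the edge survives, most of the noise does not.
x = np.concatenate([np.zeros(128), np.ones(128)]) + 0.1 * np.random.randn(256)
y = multiscale_product_denoise(x)
```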