
  Bestsellers

  • Article (No Access)

    A SEMI-LOCALIZED ELASTIC NET FOR SURFACE RECONSTRUCTION OF OBJECTS FROM MULTISLICE IMAGES

    The traveling salesman problem (TSP) is a prototypical problem of combinatorial optimization and, as such, it has received considerable attention from neural-network researchers seeking quick, heuristic solutions. An early stage in many computer vision tasks is the extraction of object shape from an image consisting of noisy candidate edge points. Since the desired shape will often be a closed contour, this problem can be viewed as a version of the TSP in which we wish to link only a subset of the points/cities (i.e. the "noise-free" ones). None of the extant neural techniques for solving the TSP can deal directly with this case. In this paper, we present a simple but effective modification to the (analog) elastic net of Durbin and Willshaw which shifts emphasis from global to local behavior during convergence, thereby allowing the net to ignore some image points. Unlike the original elastic net, this semi-localized version is shown to tolerate considerable amounts of noise. As an example practical application, we describe the extraction of "pseudo-3D" human lung outlines from multiple preprocessed magnetic resonance images of the torso. An effectiveness measure (ideally zero) quantifies the difference between the extracted shape and some idealized shape exemplar. Our method produces average effectiveness scores of 0.06 for lung shapes extracted from initial semi-automatic segmentations, which define the noise-free case. This deteriorates to 0.1 when extraction is from a noisy edge-point image obtained fully automatically using a feedforward neural network.
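The Durbin–Willshaw update that the paper modifies can be sketched in a few lines of NumPy. This is a generic illustration of the original (global) elastic net only, not the authors' semi-localized variant; the city layout, ring size, step sizes and annealing schedule below are arbitrary choices made for the sketch.

```python
import numpy as np

def mean_nearest_dist2(cities, net):
    """Mean squared distance from each city to its nearest net point."""
    return ((cities[:, None, :] - net[None, :, :]) ** 2).sum(-1).min(1).mean()

def elastic_net_step(cities, net, kappa, alpha=0.2, beta=2.0):
    """One Durbin-Willshaw elastic net update on a closed ring of net points."""
    diff = cities[:, None, :] - net[None, :, :]            # (cities, net, 2)
    w = np.exp(-(diff ** 2).sum(-1) / (2 * kappa ** 2))    # soft assignments
    w_sum = w.sum(axis=1, keepdims=True)
    w = w / np.maximum(w_sum, 1e-300)                      # guard against underflow
    attract = (w[:, :, None] * diff).sum(0)                # pull net toward cities
    smooth = np.roll(net, 1, 0) - 2 * net + np.roll(net, -1, 0)  # ring tension
    return net + alpha * attract + beta * kappa * smooth

rng = np.random.default_rng(0)
cities = rng.random((10, 2))
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
net = 0.5 + 0.1 * np.c_[np.cos(theta), np.sin(theta)]      # small starting ring
before = mean_nearest_dist2(cities, net)
kappa = 0.2
for _ in range(200):
    net = elastic_net_step(cities, net, kappa)
    kappa = max(0.01, kappa * 0.99)                        # anneal the scale
after = mean_nearest_dist2(cities, net)
```

As `kappa` is annealed toward zero, the attraction term dominates the smoothing term and the ring settles onto the cities; the semi-localized modification described in the abstract changes how this balance shifts so that noisy points can be left unvisited.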

  • Article (No Access)

    An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection

    Automatic seizure detection plays an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. The method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can represent the testing samples sparsely and more accurately, and the elastic net constraint, which combines the ℓ1-norm and ℓ2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are learned from the original ictal and interictal training samples with an online dictionary optimization algorithm and composed into the training dictionary. After that, the test samples are sparsely coded over the learned dictionary, and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with the proposed method.
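The residual-comparison step can be illustrated with a toy NumPy/scikit-learn sketch. The sub-dictionaries here are hypothetical fixed waveforms standing in for atoms learned online from EEG, and the `alpha`/`l1_ratio` values are arbitrary; only the classify-by-smaller-residual logic mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

n = 64
t = np.linspace(0, 1, n)
# Hypothetical class sub-dictionaries: "ictal" atoms are low-frequency sines,
# "interictal" atoms are high-frequency square waves (stand-ins for learned atoms)
D_ictal = np.stack([np.sin(np.pi * k * t) for k in (1, 2, 3)], axis=1)
D_inter = np.stack([np.sign(np.sin(np.pi * k * t)) for k in (9, 11, 13)], axis=1)
D = np.hstack([D_ictal, D_inter])            # composed training dictionary

rng = np.random.default_rng(1)
x = 0.8 * D_ictal[:, 0] + 0.4 * D_ictal[:, 2] + 0.05 * rng.standard_normal(n)

# Sparse coding of the test sample with an elastic net penalty
coder = ElasticNet(alpha=0.01, l1_ratio=0.7, fit_intercept=False, max_iter=5000)
coder.fit(D, x)
c = coder.coef_

# Reconstruction residual per class, using only that class's coefficients
r_ictal = np.linalg.norm(x - D_ictal @ c[:3])
r_inter = np.linalg.norm(x - D_inter @ c[3:])
label = "seizure" if r_ictal < r_inter else "nonseizure"
```

Because the test sample is built from the ictal atoms, its elastic-net code concentrates on the first sub-dictionary and the ictal residual comes out smaller.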

  • Article (No Access)

    A HYBRID ELASTIC NET METHOD FOR SOLVING THE TRAVELING SALESMAN PROBLEM

    The purpose of this paper is to present a new hybrid Elastic Net (EN) algorithm that integrates ideas from the Self-Organizing Map (SOM) and a gradient-ascent strategy into the EN algorithm. The new hybrid algorithm has two phases: a SOM-based EN phase and a gradient-ascent phase. We derived the SOM-based EN phase by analyzing the weight between a city and its converging and non-converging nodes in the limit when the EN algorithm produces a tour. Once the SOM-based EN phase gets stuck in a local minimum, the gradient-ascent algorithm attempts to fill up the valley by modifying parameters in the gradient-ascent direction of the energy function. These two phases are repeated until the EN escapes the local minimum and produces a shorter or better tour through the cities. We test the algorithm on a set of TSP instances. For all instances, the algorithm is shown to be capable of escaping from the EN local minima and producing a more meaningful tour than the EN.

  • Article (No Access)

    Influence Line Identification Method Based on VMD Combined with Improved Wavelet Threshold Denoising

    The influence line of a bridge reflects the performance of the bridge under moving vehicle load. It has a wide range of applications in structural damage detection, performance evaluation, model correction and bridge weigh-in-motion. Fast-moving vehicles excite a dynamic response in the bridge, changing the bridge response curve and making the influence line difficult to identify. In this paper, an influence line identification method based on variational mode decomposition (VMD) combined with improved wavelet threshold denoising is proposed. First, the mathematical model of influence line identification is established based on the vehicle load matrix and the bridge response. Then, VMD combined with the improved wavelet threshold denoising method is used to eliminate the dynamic fluctuation effect and noise. Finally, an elastic net penalty term is introduced into the influence line identification problem, and the B-spline basis function is used to reconstruct the influence line. To verify the effectiveness of the proposed method, a vehicle-bridge coupling model is established for numerical calculation, and the identification of bridge displacement influence lines under different conditions, such as vehicle speed, noise level, road roughness and vehicle weight, is investigated. The results show that the proposed method can effectively eliminate the dynamic fluctuation components and noise of the bridge response. When road roughness is good, the influence line can still be accurately identified at vehicle speeds up to 15 m/s.
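The improved-threshold idea can be illustrated with a single-level Haar transform in plain NumPy. The paper's actual pipeline uses VMD followed by a multi-level wavelet decomposition; the test signal, the universal-threshold rule, and the shape parameter `m` of the threshold function below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return a, d

def inv_haar_level(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def improved_threshold(d, thr, m=3.0):
    """Compromise between hard and soft thresholding: small coefficients are
    zeroed, mid-sized ones shrunk, large ones left nearly untouched."""
    out = np.zeros_like(d)
    keep = np.abs(d) > thr
    shrink = thr / np.exp(m * (np.abs(d[keep]) / thr - 1))
    out[keep] = np.sign(d[keep]) * (np.abs(d[keep]) - shrink)
    return out

rng = np.random.default_rng(2)
n = 256
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 2 * t)           # smooth quasi-static response
noisy = clean + 0.2 * rng.standard_normal(n)

a, d = haar_level(noisy)
# Universal threshold with the usual median-based noise estimate
thr = np.sqrt(2 * np.log(n)) * np.median(np.abs(d)) / 0.6745
denoised = inv_haar_level(a, improved_threshold(d, thr))
```

Since the smooth signal carries almost no energy in the detail band, thresholding that band removes mostly noise, and the reconstruction error against the clean signal drops.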

  • Article (No Access)

    BENIN: Biologically enhanced network inference

    Gene regulatory network inference is one of the central problems in computational biology. We need models that integrate the variety of available data in order to exploit their complementary information and overcome the issues of noisy and limited data. BENIN: Biologically Enhanced Network INference is our proposal to integrate data and infer more accurate networks. BENIN is a general framework that jointly considers different types of prior knowledge with expression datasets to improve the network inference. The method formulates network inference as a feature selection problem and solves it with a popular penalized regression method, the elastic net, combined with bootstrap resampling. BENIN significantly outperforms state-of-the-art methods on the simulated data from the DREAM 4 challenge when combining genome-wide location data, knockout gene expression data, and time-series expression data.
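The elastic-net-plus-bootstrap core can be sketched for a single target gene. This omits BENIN's prior-knowledge integration entirely; the synthetic data, the true regulators (genes 2 and 5), and the selection-frequency cutoff are invented for the illustration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)
n_samples, n_genes = 120, 8
X = rng.standard_normal((n_samples, n_genes))
# Hypothetical ground truth: the target gene is regulated by genes 2 and 5
y = 1.5 * X[:, 2] - 1.0 * X[:, 5] + 0.3 * rng.standard_normal(n_samples)

n_boot = 50
freq = np.zeros(n_genes)
for _ in range(n_boot):
    idx = rng.integers(0, n_samples, n_samples)      # bootstrap resample
    model = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000)
    model.fit(X[idx], y[idx])
    freq += model.coef_ != 0                         # count selections

freq /= n_boot   # selection frequency = confidence of each candidate edge
predicted_regulators = np.where(freq > 0.8)[0]
```

Edges selected in almost every resample are kept; regulators with weak, resample-dependent support are filtered out, which is what makes the bootstrap step stabilize the inferred network.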

  • Article (No Access)

    Cluster feature selection in high-dimensional linear models

    This paper concerns variable screening when highly correlated variables exist in high-dimensional linear models. We propose a novel cluster feature selection (CFS) procedure based on the elastic net and linear correlation variable screening to enjoy the benefits of the two methods. When calculating the correlation between the predictors and the response, we consider highly correlated groups of predictors instead of individual ones. This is in contrast to the usual linear correlation variable screening. Within each correlated group, we apply the elastic net to select variables and estimate their parameters. This avoids the drawback of mistakenly eliminating true relevant variables when they are highly correlated, as LASSO [R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B 58 (1996) 267–288] does. After applying the CFS procedure, the maximum absolute correlation coefficient between clusters becomes smaller, and any common model selection method such as sure independence screening (SIS) [J. Fan and J. Lv, Sure independence screening for ultrahigh dimensional feature space, J. R. Stat. Soc. Ser. B 70 (2008) 849–911] or LASSO can be applied to improve the results. Extensive numerical examples, including pure simulation examples and semi-real examples, are conducted to show the good performance of our procedure.
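The group-then-select idea can be sketched in NumPy/scikit-learn. The greedy correlation grouping, the 0.8 cutoff, and the synthetic three-duplicates-per-factor design are simplifying assumptions; the actual CFS procedure and its screening rule are more refined.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
n, p = 200, 12
Z = rng.standard_normal((n, 4))
# Hypothetical design: three nearly identical predictors per latent factor
X = np.repeat(Z, 3, axis=1) + 0.1 * rng.standard_normal((n, p))
y = 2.0 * Z[:, 0] + 0.3 * rng.standard_normal(n)

# Step 1: greedily group predictors whose |correlation| exceeds 0.8
corr = np.abs(np.corrcoef(X, rowvar=False))
unassigned, groups = list(range(p)), []
while unassigned:
    seed = unassigned[0]
    group = [j for j in unassigned if corr[seed, j] > 0.8]
    groups.append(group)
    unassigned = [j for j in unassigned if j not in group]

# Step 2: screen groups by correlating each group's mean with the response
scores = [abs(np.corrcoef(X[:, g].mean(axis=1), y)[0, 1]) for g in groups]
best = groups[int(np.argmax(scores))]

# Step 3: run the elastic net only within the screened group, so highly
# correlated relevant variables are kept together instead of dropped
model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X[:, best], y)
```

Screening on group means means the relevance signal is not split across near-duplicate columns, and the elastic net's ridge component then shares the coefficient mass among them rather than arbitrarily keeping one, which is the failure mode of the pure LASSO noted in the abstract.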

  • Chapter (No Access)

    COLLOCATION-BASED SPARSE ESTIMATION FOR CONSTRUCTING DYNAMIC GENE NETWORKS

    One of the open problems in systems biology is to infer dynamic gene networks describing the underlying biological process with mathematical, statistical and computational methods. First-order difference equation-based models, such as dynamic Bayesian networks and vector autoregressive models, have been used to infer time-lagged relationships between genes from time-series microarray data. However, two primary problems greatly reduce the effectiveness of current approaches. The first is the tacit assumption that the time lag is stationary. The second is the inseparability of measurement noise and process noise (unmeasured disturbances that propagate through the process over time).

    To address these problems, we propose a stochastic differential equation model for inferring continuous-time dynamic gene networks in the presence of both process noise and observation noise. We present a collocation-based sparse estimation method for simultaneous parameter estimation and model selection in this model. The collocation-based approach requires considerably less computational effort than traditional methods for ordinary stochastic differential equation models. Various kinds of biological knowledge can also be incorporated easily to refine the estimation accuracy. The results on simulated data and real time-series expression data from human primary small airway epithelial cells demonstrate that the proposed approach outperforms competing approaches and can identify significant genes influenced by gefitinib.
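The general idea of sparse penalized estimation of a gene network from SDE trajectories can be shown with a much-simplified stand-in: regressing finite differences on the current state. This is not the authors' collocation scheme and ignores observation noise; the network, noise levels, and penalty weights are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)
p, T, dt = 5, 20000, 0.01
# Hypothetical sparse drift: gene 0 activates gene 1, gene 1 represses gene 2,
# and every gene decays toward its baseline
A = -1.0 * np.eye(p)
A[1, 0], A[2, 1] = 0.8, -0.8

x = np.zeros((T, p))
x[0] = rng.standard_normal(p)
for t in range(T - 1):        # Euler-Maruyama simulation with process noise
    x[t + 1] = x[t] + (A @ x[t]) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(p)

# Regress finite differences on the current state, one target gene at a time,
# with an elastic net penalty to keep the recovered network sparse
dx = (x[1:] - x[:-1]) / dt
A_hat = np.zeros((p, p))
for i in range(p):
    fit = ElasticNet(alpha=0.01, l1_ratio=0.9, fit_intercept=False, max_iter=5000)
    fit.fit(x[:-1], dx[:, i])
    A_hat[i] = fit.coef_
```

The penalty drives the many absent interactions to (near) zero while the true activating and repressing edges survive with the correct signs; the collocation approach in the chapter achieves this more efficiently and while also handling observation noise.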

  • Chapter (Open Access)

    methylDMV: SIMULTANEOUS DETECTION OF DIFFERENTIAL DNA METHYLATION AND VARIABILITY WITH CONFOUNDER ADJUSTMENT

    DNA methylation has emerged as a promising epigenetic marker for disease diagnosis. Both the differential mean (DM) and differential variability (DV) in methylation have been shown to contribute to transcriptional aberration and disease pathogenesis. The presence of confounding factors in large-scale EWAS may affect the methylation values and hamper accurate marker discovery. In this paper, we propose a flexible framework called methylDMV which allows for confounding factor adjustment and enables simultaneous characterization and identification of CpGs exhibiting DM only, DV only, and both DM and DV. The proposed framework also allows for prioritization and selection of candidate features to be included in the prediction algorithm. We illustrate the utility of methylDMV on several TCGA datasets. An R package methylDMV implementing our proposed method is available at http://www.ams.sunysb.edu/~pfkuan/softwares.html#methylDMV.
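The DM/DV taxonomy can be illustrated with generic two-sample tests per CpG: a Welch t-test for a mean shift and Levene's test for a variability shift. This is not methylDMV itself (in particular there is no confounder adjustment here), and the group sizes, effect sizes, and p-value cutoff are invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 100   # samples per group; four hypothetical CpGs, one per category
# Columns by design: DM only, DV only, DM+DV, neither
ctrl = rng.normal(loc=[0.3, 0.5, 0.3, 0.5], scale=[0.05, 0.05, 0.05, 0.05],
                  size=(n, 4))
case = rng.normal(loc=[0.6, 0.5, 0.6, 0.5], scale=[0.05, 0.15, 0.15, 0.05],
                  size=(n, 4))

# Differential mean: Welch t-test; differential variability: Levene's test
dm_p = np.array([stats.ttest_ind(case[:, j], ctrl[:, j], equal_var=False).pvalue
                 for j in range(4)])
dv_p = np.array([stats.levene(case[:, j], ctrl[:, j]).pvalue for j in range(4)])

labels = []
for j in range(4):
    dm, dv = dm_p[j] < 1e-3, dv_p[j] < 1e-3
    labels.append("DM+DV" if dm and dv else "DM" if dm else "DV" if dv else "neither")
```

Running both tests on every CpG is what makes the four categories (DM only, DV only, both, neither) simultaneously identifiable, which is the characterization the framework provides after adjusting for confounders.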

  • Chapter (No Access)

    Survival Analysis with High-Dimensional Covariates

    Recent interest in the application of microarray technology focuses on relating gene expression profiles to censored survival outcomes such as patients' overall survival time or time to cancer relapse. Due to the high-dimensional nature of gene expression data, regularization becomes an effective approach for such analyses. In this chapter, we review several aspects of the recent development of penalized regression models for censored survival data with high-dimensional covariates, e.g. gene expression. We first discuss the Cox proportional hazards model (Cox 1972) as the primary example and then consider the accelerated failure time model (Kalbfleisch and Prentice 2002).
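A minimal NumPy sketch of a penalized Cox fit gives the flavor of the models the chapter reviews: a ridge-penalized partial likelihood maximized by plain gradient descent. Production analyses use coordinate-descent solvers with elastic net penalties and tie handling; the simulated data, penalty weight, step size, and iteration count here are arbitrary sketch choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
# Exponential survival times under proportional hazards, plus random censoring
times = rng.exponential(1.0 / np.exp(X @ beta_true))
cens = rng.exponential(2.0, n)
event = (times <= cens).astype(float)
obs = np.minimum(times, cens)

order = np.argsort(obs)          # after sorting by time, risk sets are suffixes
X, event = X[order], event[order]

def grad_neg_log_plik(beta, lam=0.1):
    """Gradient of the ridge-penalized negative Cox partial log-likelihood
    (Breslow form, no ties in this simulated data)."""
    w = np.exp(X @ beta)
    # Reverse cumulative sums give risk-set totals at each ordered event time
    denom = np.cumsum(w[::-1])[::-1]
    wx = np.cumsum((w[:, None] * X)[::-1], axis=0)[::-1]
    return -(event[:, None] * (X - wx / denom[:, None])).sum(0) + 2 * lam * beta

beta = np.zeros(p)
for _ in range(500):             # plain gradient descent, small fixed step
    beta -= 0.005 * grad_neg_log_plik(beta)
```

The recovered coefficients approximate the true log hazard ratios, with the ridge term supplying the regularization that becomes essential once `p` grows to gene-expression scale.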