
  Bestsellers

  • Article | No Access

    SUFFICIENT CONDITIONS FOR INTERPOLATION AND SAMPLING HYPERSURFACES IN THE BERGMAN BALL

    We give sufficient conditions for a closed smooth hypersurface W in the n-dimensional Bergman ball to be interpolating or sampling. As in the recent work [5] of Ortega-Cerdà, Schuster and the second author on the Bargmann–Fock space, our sufficient conditions are expressed in terms of a geometric density of the hypersurface that, though less natural, is shown to be equivalent to Bergman ball analogs of the Beurling-type densities used in [5]. In the interpolation theorem we interpolate L2 data from W to the ball using the method of Ohsawa–Takegoshi, extended to the present setting, rather than the Cousin I approach used in [5]. In the sampling theorem, our proof is completely different from that of [5]: we adapt the more natural method of Berndtsson and Ortega-Cerdà [1] to higher dimensions, and this adaptation motivated the notion of density introduced here. The approaches of [5] and of the present paper both work in the case of the Bergman ball as well as in that of the Bargmann–Fock space.

  • Article | No Access

    BUDGET ESTIMATION AND CONTROL FOR BAG-OF-TASKS SCHEDULING IN CLOUDS

    Commercial cloud offerings, such as Amazon's EC2, let users allocate compute resources on demand, charging based on reserved time intervals. While this gives great flexibility to elastic applications, users lack guidance for choosing between multiple offerings in order to complete their computations within given budget constraints. In this work, we present BaTS, our budget-constrained scheduler. Using a small task sample, BaTS can estimate costs and makespan for a given bag on different cloud offerings. It provides the user with a choice of options before execution and then schedules the bag according to the user's preferences. BaTS requires no a priori information about task completion times. We evaluate BaTS by emulating different cloud environments on the DAS-3 multi-cluster system. Our results show that BaTS correctly estimates budget and makespan for the scenarios investigated; the user-selected schedule is then executed within the given budget limitations.
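
    The core of this approach can be illustrated with a short sketch: extrapolate cost and makespan for a whole bag from the mean runtime of a small executed sample, for each candidate offering. The idealized load balance, hourly billing and offering parameters below are illustrative assumptions, not details of the BaTS implementation.

```python
# Hypothetical sketch of sampling-based budget/makespan estimation (not the BaTS code).
import math
import random
import statistics

def estimate_offering(sample_runtimes_s, n_tasks, machines, price_per_hour):
    """Extrapolate cost and makespan for a whole bag from a small task sample."""
    mean_runtime = statistics.mean(sample_runtimes_s)
    total_cpu_s = mean_runtime * n_tasks
    makespan_s = total_cpu_s / machines                      # assumes perfect load balance
    charged_hours = machines * math.ceil(makespan_s / 3600)  # bill whole reserved hours
    return {"makespan_h": round(makespan_s / 3600, 2),
            "cost": round(charged_hours * price_per_hour, 2)}

random.seed(0)
bag = [random.uniform(60, 600) for _ in range(1000)]   # unknown task runtimes (seconds)
sample = random.sample(bag, 30)                         # small sample actually executed

for name, machines, price in [("small", 8, 0.10), ("large", 32, 0.45)]:
    print(name, estimate_offering(sample, len(bag), machines, price))
```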

  • Article | No Access

    E-BaTS: Energy-Aware Scheduling for Bag-of-Task Applications in HPC Clusters

    High-Performance Computing (HPC) systems consume large amounts of energy. As energy-consumption projections for HPC keep growing, it is important to make users aware of the energy spent on executing their applications. Drawing from our experience with exposing cost and performance in public clouds, in this paper we present a generic mechanism to compute fast and accurate estimates of the tradeoffs between performance (expressed as makespan) and the energy consumption of applications running on HPC clusters. We validate our approach with a prototype, called E-BaTS, on a wide variety of HPC bags-of-tasks. Our experiments show that E-BaTS produces conservative estimates with errors below 5%, while requiring at most 12% of the energy and time of an exhaustive search to provide configurations close to the optimal ones in terms of trade-offs between energy consumption and makespan.

  • Article | No Access

    Data resampling: An approach for improving characterization of complex dynamics from noisy interspike intervals

    Extracting dynamics from point processes produced by different models of spiking phenomena depends on several factors that affect the quality of reconstruction of nonuniformly sampled dynamical systems. Although such reconstruction is supported by embedding theorems analogous to the Takens theorem for uniformly sampled time series, a limited number of samples, a low firing rate and the presence of noise can introduce significant computational errors and an incorrect characterization of the analyzed oscillatory regimes. Here, we discuss how to improve the accuracy of the quantitative evaluation of complex oscillations from point processes using data resampling. This approach provides a more stable estimation of Lyapunov exponents for noisy datasets. The advantages of resampling-based reconstruction are confirmed by the analysis of various spiking mechanisms, including the generation of single firing events and chaotic bursts.
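
    A minimal sketch of the resampling idea, assuming a simple bootstrap over interspike intervals; the statistic below (the mean interval) is only a stand-in for the Lyapunov-exponent estimator discussed in the paper.

```python
# Bootstrap resampling over noisy interspike intervals (ISIs); the statistic is a
# stand-in for a dynamical estimator such as a Lyapunov exponent.
import numpy as np

rng = np.random.default_rng(1)
isi = rng.exponential(scale=0.1, size=500) + 0.02 * rng.standard_normal(500)  # noisy ISIs

def statistic(series):
    return series.mean()  # placeholder for the estimator of interest

estimates = []
for _ in range(200):                                     # resampled surrogate series
    resampled = rng.choice(isi, size=isi.size, replace=True)
    estimates.append(statistic(resampled))

print("point estimate:", statistic(isi))
print("resampled mean +/- std:", np.mean(estimates), np.std(estimates))
```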

  • Article | No Access

    MINIMAL CONSISTENT SUBSET FOR HYPER SURFACE CLASSIFICATION METHOD

    Hyper Surface Classification (HSC), which is based on the Jordan Curve Theorem in topology, has proven in our previous work to be a simple and effective method for classifying large databases. To select a representative subset from the original sample set, the Minimal Consistent Subset (MCS) of HSC is studied in this paper. For the HSC method, one of the most important features of the MCS is that it yields the same classification model as the entire sample dataset and thus fully reflects its classification ability. From this point of view, the MCS is the best way of sampling from the original dataset for HSC. Furthermore, because of the minimality of the MCS, deleting one or more samples from it leads to a reduction in generalization ability, which can be predicted exactly by the formula proposed in this paper.
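
    The consistency requirement can be illustrated with a small sketch that grows a subset until it classifies every original sample correctly. A 1-NN rule stands in for the Hyper Surface classifier here, purely as an assumption to keep the example self-contained.

```python
# Illustrative consistent-subset selection: grow a subset until the whole data set
# is classified correctly by 1-NN on the subset (a stand-in for the HSC model).
import numpy as np

def consistent_subset(X, y):
    keep = [0]                                       # start with a single prototype
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:   # misclassified by the current subset
                keep.append(i)
                changed = True
    return sorted(set(keep))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
subset = consistent_subset(X, y)
print(f"kept {len(subset)} of {len(X)} samples")
```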

  • Article | No Access

    STRUCTURE-EMBEDDED AUC-SVM

    AUC-SVM directly maximizes the area under the ROC curve (AUC) by minimizing its hinge-loss relaxation, and the decision function is determined by support vector sample pairs that play the same role as the support vector samples in SVM. Such a learning paradigm emphasizes the local discriminative information associated with these support vectors while hardly taking an overall view of the data, and may therefore lose global distribution information that is favorable for classification. Moreover, because the number of training sample pairs grows quadratically with the number of samples, AUC-SVM is computationally expensive, so pair sampling is usually adopted, incurring a further loss of distribution information. To compensate for this loss and simultaneously boost AUC-SVM performance, in this paper we develop a novel structure-embedded AUC-SVM (SAUC-SVM for short) by embedding the global structure information of the whole dataset into AUC-SVM. With such an embedding, the proposed SAUC-SVM incorporates both the local discriminative information and the global structure information in the data into a unified formulation and consequently guarantees better generalization performance. Comparative experiments on both synthetic and real datasets confirm its effectiveness.
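
    The pairwise hinge relaxation of the AUC, together with pair sampling to tame the quadratic number of pairs, can be sketched as follows; the linear scorer and the uniform pair sampling are illustrative assumptions.

```python
# Pairwise hinge relaxation of the AUC over sampled positive/negative pairs.
import numpy as np

def auc_hinge_loss(w, X_pos, X_neg, n_pairs=1000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    i = rng.integers(0, len(X_pos), n_pairs)
    j = rng.integers(0, len(X_neg), n_pairs)
    margins = X_pos[i] @ w - X_neg[j] @ w          # score(positive) - score(negative)
    return np.maximum(0.0, 1.0 - margins).mean()   # hinge loss on each sampled pair

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, (500, 5))
X_neg = rng.normal(-1.0, 1.0, (500, 5))
w = np.ones(5)                                      # an illustrative linear scorer
print("sampled pairwise hinge loss:", auc_hinge_loss(w, X_pos, X_neg, rng=rng))
```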

  • Article | No Access

    STiMR k-Means: An Efficient Clustering Method for Big Data

    Big Data clustering has become an important challenge in data analysis, since several applications require scalable clustering methods to organize such data into groups of similar objects. Given the computational cost of most existing clustering methods, we propose in this paper a new clustering method, referred to as STiMR k-means, able to provide a good tradeoff between scalability and clustering quality. The proposed method is based on the combination of three acceleration techniques: sampling, the triangle inequality and MapReduce. Sampling is used to reduce the number of data points when building cluster prototypes, the triangle inequality is used to reduce the number of comparisons when looking for nearest clusters, and MapReduce is used to configure a parallel framework for running the proposed method. Experiments performed on simulated and real datasets have shown the effectiveness of the proposed method, compared with existing ones, in terms of running time, scalability and internal validity measures.
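
    The triangle-inequality acceleration can be sketched in isolation: half the distance between two cluster centers gives a lower bound that lets many point-to-center distance computations be skipped. The sampling and MapReduce stages of STiMR k-means are omitted in this illustration.

```python
# Triangle-inequality pruning in the k-means assignment step.
import numpy as np

def assign_with_triangle_inequality(X, centers):
    # Precompute half the distance between every pair of centers.
    cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2) / 2.0
    labels = np.empty(len(X), dtype=int)
    for n, x in enumerate(X):
        best = 0
        d_best = np.linalg.norm(x - centers[0])
        for k in range(1, len(centers)):
            if cc[best, k] >= d_best:       # triangle inequality: center k cannot be closer
                continue
            d = np.linalg.norm(x - centers[k])
            if d < d_best:
                best, d_best = k, d
        labels[n] = best
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
centers = rng.normal(size=(5, 2))
print(np.bincount(assign_with_triangle_inequality(X, centers)))
```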

  • Article | No Access

    Fractal Feature Based Image Resolution Enhancement Using Wavelet–Fractal Transformation in Gradient Domain

    Fractal geometry is applied extensively in applications such as pattern recognition, texture analysis and segmentation, and its application requires estimation of fractal features. The fractal dimension and fractal length have been found effective for analyzing and measuring image features such as texture and resolution. This paper proposes a new wavelet–fractal technique for image resolution enhancement. The resolution of the wavelet sub-bands is improved using a scaling operator and the result is then transformed into a texture vector. The proposed method then computes the fractal dimension and fractal length in the gradient domain, which are used for resolution enhancement. It is observed that, by using the scaling operator in the gradient domain, the fractal dimension and fractal length become scale invariant. The major advantage of the proposed wavelet–fractal technique is that the feature vector retains both the fractal dimension and the fractal length, so the resolution-enhanced image restores the texture information well. The texture information has also been examined in terms of fractal dimension with varied sample sizes. We present qualitative and quantitative comparisons of the proposed method with existing state-of-the-art methods.
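
    For readers unfamiliar with fractal-dimension estimation, a minimal box-counting sketch is shown below; the paper's wavelet–fractal pipeline in the gradient domain is not reproduced, and the synthetic binary texture is an assumption made only for a runnable example.

```python
# Minimal box-counting estimate of fractal dimension for a binary image.
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing structure
    # Slope of log(count) versus log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
img = rng.random((256, 256)) > 0.7                     # synthetic binary texture
print("box-counting dimension ~", round(box_counting_dimension(img), 2))
```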

  • Article | No Access

    DYNAMICS OF PIECEWISE LINEAR DISCONTINUOUS MAPS

    In this paper, the dynamics of maps representing classes of controlled sampled systems with backlash are examined. First, a bilinear one-dimensional map is considered, and the analysis shows that, depending on the value of the control parameter, all orbits originating in an attractive set are either periodic or dense on the attractor. Moreover, the dense orbits have sensitive dependence on initial data but behave rather regularly, i.e. they have quasiperiodic subsequences and the Lyapunov exponent of every orbit is zero. The inclusion of a second parameter, the processing delay, in the model leads to a piecewise linear two-dimensional map. The dynamics of this map are studied using numerical simulations, which indicate behavior similar to that in the one-dimensional case.
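
    A toy example of a piecewise linear discontinuous map with the qualitative behavior the abstract describes (zero Lyapunov exponent, orbits that are periodic or dense) is sketched below. It is a rigid rotation written with a cut, not the controlled-sampled-system-with-backlash map analyzed in the paper.

```python
# A piecewise linear discontinuous map with unit slope on both branches.
import math

def f(x, omega=0.3819660):
    y = x + omega                      # unit slope on each branch
    return y - 1.0 if y >= 1.0 else y  # discontinuity at the wrap-around

def lyapunov_exponent(x0, n=100_000):
    x, acc = x0, 0.0
    for _ in range(n):
        x = f(x)
        acc += math.log(1.0)           # |f'(x)| = 1 almost everywhere
    return acc / n

orbit = [0.1]
for _ in range(10):
    orbit.append(round(f(orbit[-1]), 4))
print("first orbit points:", orbit)
print("estimated Lyapunov exponent:", lyapunov_exponent(0.1))  # 0, as in the abstract
```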

  • Article | No Access

    PROVABLE DIMENSION DETECTION USING PRINCIPAL COMPONENT ANALYSIS

    We analyze an algorithm based on principal component analysis (PCA) for detecting the dimension k of a smooth manifold from a set P of point samples. The best running time so far is O(d 2^{O(k^7 log k)}) by Giesen and Wagner after the adaptive neighborhood graph is constructed. Given the adaptive neighborhood graph, the PCA-based algorithm outputs the true dimension in O(d 2^{O(k)}) time, provided that P satisfies a standard sampling condition as in previous results. Our experimental results validate the effectiveness of the approach. A further advantage is that both the algorithm and its analysis can be generalized to the noisy case, in which small perturbations of the samples and a small portion of outliers are allowed.
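
    The PCA step can be sketched as follows: center a point's nearest neighbors and count the singular values that remain significant; that count estimates the local dimension k. The neighborhood size, threshold and helix test data are illustrative choices, not the parameters of the cited analysis.

```python
# Local PCA: the number of significant singular values of a centered neighborhood
# estimates the manifold dimension k.
import numpy as np

def local_pca_dimension(P, idx, n_neighbors=20, threshold=0.2):
    dists = np.linalg.norm(P - P[idx], axis=1)
    nbrs = P[np.argsort(dists)[1:n_neighbors + 1]]   # nearest neighbors, excluding the point
    centered = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return int(np.sum(s / s[0] > threshold))         # count significant directions

rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, 500)
helix = np.column_stack([np.cos(t), np.sin(t), 0.2 * t])   # a 1-manifold embedded in R^3
print("estimated dimension:", local_pca_dimension(helix, idx=42))
```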

  • Article | No Access

    AUTOMATIC RECONSTRUCTION OF 3D CAD MODELS FROM DIGITAL SCANS

    We present an approach for the reconstruction and approximation of 3D CAD models from an unorganized collection of points. Applications include rapid reverse engineering of existing objects for use in a virtual prototyping environment, including computer aided design and manufacturing. Our reconstruction approach is flexible enough to permit interpolation of both smooth surfaces and sharp features, while placing few restrictions on the geometry or topology of the object.

    Our algorithm is based on alpha-shapes to compute an initial triangle mesh approximating the surface of the object. A mesh reduction technique is applied to the dense triangle mesh to build a simplified approximation, while retaining important topological and geometric characteristics of the model. The reduced mesh is interpolated with piecewise algebraic surface patches which approximate the original points.

    The process is fully automatic, and the reconstruction is guaranteed to be homeomorphic and error bounded with respect to the original model when certain sampling requirements are satisfied. The resulting model is suitable for typical CAD modeling and analysis applications.

  • Article | No Access

    On the continuum limit of epidemiological models on graphs: Convergence and approximation results

    We focus on an epidemiological model (the archetypical SIR system) defined on graphs and study the asymptotic behavior of the solutions as the number of vertices in the graph diverges. By relying on the theory of graphons we provide a characterization of the limit and establish convergence results. We also provide approximation results for both deterministic and random discretizations.
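
    A minimal sketch of an SIR system on a finite graph, where each vertex's infection pressure is the neighborhood average of infected fractions; the random graph and rate parameters are illustrative, and the graphon limit studied in the paper corresponds to letting the number of vertices grow.

```python
# Forward-Euler integration of an SIR system defined on a graph.
import numpy as np

def simulate_sir_on_graph(A, beta=0.3, gamma=0.1, dt=0.1, steps=500, i0=0.01):
    n = A.shape[0]
    S, I, R = np.full(n, 1 - i0), np.full(n, i0), np.zeros(n)
    deg = np.maximum(A.sum(axis=1), 1)
    for _ in range(steps):
        force = A @ I / deg                       # neighborhood-averaged infection pressure
        dS = -beta * S * force
        dI = beta * S * force - gamma * I
        dR = gamma * I
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

rng = np.random.default_rng(0)
n = 200
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric Erdos-Renyi adjacency matrix
S, I, R = simulate_sir_on_graph(A)
print("final mean recovered fraction:", R.mean().round(3))
```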

  • Article | No Access

    Optimization by linear kinetic equations and mean-field Langevin dynamics

    One of the most striking examples of the close connections between global optimization processes and statistical physics is the simulated annealing method, inspired by the famous Monte Carlo algorithm devised by Metropolis et al. in the middle of the last century. In this paper, we show how the tools of linear kinetic theory allow this gradient-free algorithm to be described from the perspective of statistical physics and how convergence to the global minimum can be related to classical entropy inequalities. This analysis highlights the strong link between linear Boltzmann equations and stochastic optimization methods governed by Markov processes. Thanks to this formalism, we establish the connections between the simulated annealing process and the corresponding mean-field Langevin dynamics characterized by a stochastic gradient descent approach. Generalizations to other selection strategies in simulated annealing that avoid the acceptance–rejection dynamics are also provided.
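
    A minimal simulated annealing loop with the Metropolis acceptance rule mentioned in the abstract is sketched below; the objective, the Gaussian proposal and the geometric cooling schedule are illustrative choices.

```python
# Gradient-free simulated annealing with Metropolis acceptance.
import math
import random

def objective(x):
    return x * x + 10 * math.sin(3 * x)           # non-convex 1D test function

def simulated_annealing(x0=5.0, t0=5.0, cooling=0.999, steps=20_000):
    x, fx = x0, objective(x0)
    t = t0
    for _ in range(steps):
        y = x + random.gauss(0.0, 0.5)            # gradient-free proposal
        fy = objective(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                         # Metropolis accept/reject
        t *= cooling                              # geometric cooling schedule
    return x, fx

random.seed(0)
print("approximate minimizer and value:", simulated_annealing())
```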

  • Article | No Access

    An Instance Selection Algorithm Based on ReliefF

    Due to the rapid growth of data, many methods have been proposed to extract useful data and remove noisy data. Instance selection is one such method: it selects some instances of a data set and removes the others. This paper proposes a new instance selection algorithm based on ReliefF, a feature selection algorithm. In the proposed algorithm, the nearest instances of each class are found for each instance based on the Jaccard index. Then, based on this nearest-neighbor set, the weight of each instance is calculated. Finally, only the instances with the largest weights are selected. The algorithm can reduce data at a specified rate and can be run in parallel over the instances. It works on a variety of data sets with nominal and numeric data and with missing values, and it is also suitable for imbalanced data sets. The proposed algorithm is tested on three data sets. Results show that it can reduce the volume of data without a significant change in classification accuracy on these data sets.
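
    The selection scheme can be sketched roughly as follows, using Jaccard similarity on binary features and a simple hit-versus-miss weight; the exact weight formula and the keep ratio are illustrative assumptions rather than the paper's definitions.

```python
# Instance weights from Jaccard similarity to the nearest same-class and other-class instances.
import numpy as np

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def select_instances(X, y, keep_ratio=0.5):
    weights = np.zeros(len(X))
    for i in range(len(X)):
        sims = np.array([jaccard(X[i], X[j]) if j != i else -1.0 for j in range(len(X))])
        same, other = (y == y[i]), (y != y[i])
        same[i] = False
        # Reward closeness to the nearest class-mate, penalize closeness to the other class.
        weights[i] = sims[same].max() - sims[other].max()
    n_keep = int(keep_ratio * len(X))
    return np.argsort(weights)[-n_keep:]             # keep the highest-weighted instances

rng = np.random.default_rng(0)
X = rng.random((100, 20)) < np.where(np.arange(100)[:, None] < 50, 0.3, 0.7)  # binary features
y = np.array([0] * 50 + [1] * 50)
print("kept", len(select_instances(X, y)), "of", len(X), "instances")
```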

  • Article | No Access

    SCALING OF MULTIFRACTAL MEASURES UNDER AFFINE TRANSFORMATIONS

    Fractals, 01 Jun 2002

    Multifractal measures have the property of being statistically invariant under some group of space and field transformations. In general, the measures lack invariance for transformations outside this group, but we show that an important exception exists. We consider measures that are invariant under isotropic contraction and examine how their marginal distribution changes under affine space transformations. We find that, for regions that are highly elongated in one direction, the marginal distribution satisfies a multifractal scaling relation that depends on the direction of elongation. We use numerical simulation to investigate the transition range between asymptotic scaling regimes, when the regions do not have extreme elongation in any one direction. The results have practical implications for the inference of multifractal properties from observations in regions with one or more fixed dimensions.

  • Article | No Access

    FAST HIERARCHICAL DISCRETIZATION OF PARAMETRIC BOUNDARY REPRESENTATIONS

    Prevalent discretization methods based on Delaunay triangulations and advancing fronts, which sample and mesh simultaneously, can guarantee well shaped triangles but at a fairly high computational cost. In this paper we present a novel and flexible two-part sampling and meshing algorithm, which produces topologically correct meshes on arbitrary boundary representations whose faces are represented parametrically, without requiring an initial coarse mesh. Our method is based on a hybrid spatial partitioning scheme driven by user-designed subdivision rules that combines the power of quadtree decomposition with the flexibility of the binary decompositions to produce meshes that favor prescribed geometric properties. Importantly, the algorithm offers a performance increase of approximately two orders of magnitude over Delaunay based methods and at least one order of magnitude over advancing front methods. At the same time, our algorithm is practically as fast as the computationally optimal algorithm based on a pure quadtree decomposition, but with a markedly better distribution in the regions with parametric distortion. The hierarchical nature of our surface decomposition is well suited to interactive applications and multithreaded implementation.
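
    The idea of a rule-driven hierarchical decomposition can be sketched with a toy quadtree over the parameter domain [0,1]^2, refined where an assumed distortion function is large; the refinement rule and distortion measure below are illustrative, not the paper's subdivision criteria.

```python
# Toy quadtree subdivision of a parametric domain, refined where distortion is high.
def distortion(u, v):
    return 1.0 + 8.0 * u * u                   # pretend the surface stretches with u

def subdivide(u0, v0, size, depth, cells):
    # Emit the cell if it is deep enough or small relative to the local distortion.
    if depth == 0 or distortion(u0 + size / 2, v0 + size / 2) * size < 0.5:
        cells.append((u0, v0, size))
        return
    half = size / 2
    for du in (0.0, half):
        for dv in (0.0, half):
            subdivide(u0 + du, v0 + dv, half, depth - 1, cells)

cells = []
subdivide(0.0, 0.0, 1.0, 6, cells)
print(len(cells), "leaf cells; smallest size =", min(c[2] for c in cells))
```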

  • Article | No Access

    Real-Time Continuous Image Processing

    In this work, we propose a framework that performs a number of popular image-processing operations in the continuous domain. This is in contrast to the standard practice of defining them as operations over discrete sequences of sampled values. The guiding principle is that, in order to prevent aliasing, nonlinear image-processing operations should ideally be performed prior to prefiltering and sampling. This is of course impractical, as we may not have access to the continuous input. Even so, we show that it is best to apply image-processing operations over the continuous reconstruction of the input. This transformed continuous representation is then prefiltered and sampled to produce the output. The use of high-quality reconstruction strategies brings this alternative much closer to the ideal than directly operating over discrete values. We illustrate the advantages of our framework with several popular effects. In each case, we demonstrate the quality difference between continuous image-processing operations, their discrete counterparts and previous anti-aliasing alternatives. Finally, our GPU implementation shows that current graphics hardware has enough computational power to perform continuous image processing in real time.
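
    A one-dimensional sketch of the reconstruct-operate-prefilter-sample idea is shown below: a nonlinear operation (thresholding) is applied to a continuous reconstruction of the samples and the result is prefiltered and resampled, versus applying the operation directly to the discrete samples. The linear-interpolation reconstruction and box prefilter are deliberately simple stand-ins for the high-quality kernels discussed in the paper.

```python
# 1D reconstruct -> nonlinear operation -> prefilter -> resample, versus a purely discrete path.
import numpy as np

def reconstruct(samples, factor=8):
    # Dense linear-interpolation reconstruction standing in for a high-quality kernel.
    x = np.arange(len(samples))
    xd = np.linspace(0, len(samples) - 1, factor * (len(samples) - 1) + 1)
    return np.interp(xd, x, samples)

def prefilter_and_sample(dense, factor=8):
    kernel = np.ones(factor) / factor              # box prefilter before decimation
    return np.convolve(dense, kernel, mode="same")[::factor]

samples = np.sin(np.linspace(0, 6 * np.pi, 64))
dense = reconstruct(samples)
continuous_path = prefilter_and_sample(np.where(dense > 0, 1.0, 0.0))
discrete_path = np.where(samples > 0, 1.0, 0.0)
print("paths differ at", int(np.sum(continuous_path.round(2) != discrete_path)),
      "of 64 positions (antialiased transitions)")
```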

  • Article | No Access

    Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems

    We study the problem of reconstructing solutions of inverse problems when only noisy measurements are available. We assume that the problem can be modeled with an infinite-dimensional forward operator that is not continuously invertible. Then, we restrict this forward operator to finite-dimensional spaces so that the inverse is Lipschitz continuous. For the inverse operator, we demonstrate that there exists a neural network which is a robust-to-noise approximation of the operator. In addition, we show that these neural networks can be learned from appropriately perturbed training data. We demonstrate that this approach is applicable to a wide range of inverse problems of practical interest. Numerical examples are given that support the theoretical findings.

  • Article | No Access

    OPTIMIZATION BIAS IN ENERGY-BASED STRUCTURE PREDICTION

    Physics-based computational approaches to predicting the structure of macromolecules such as proteins are gaining increased use, but there are remaining challenges. In the current work, it is demonstrated that in energy-based prediction methods, the degree of optimization of the sampled structures can influence the prediction results. In particular, discrepancies in the degree of local sampling can bias the predictions in favor of the oversampled structures by shifting the local probability distributions of the minimum sampled energies. In simple systems, it is shown that the magnitude of the errors can be calculated from the energy surface, and for certain model systems, derived analytically. Further, it is shown that for energy wells whose forms differ only by a randomly assigned energy shift, the optimal accuracy of prediction is achieved when the sampling around each structure is equal. Energy correction terms can be used in cases of unequal sampling to reproduce the total probabilities that would occur under equal sampling, but optimal corrections only partially restore the prediction accuracy lost to unequal sampling. For multiwell systems, the determination of the correction terms is a multibody problem; it is shown that the involved cross-correlation multiple integrals can be reduced to simpler integrals. The possible implications of the current analysis for macromolecular structure prediction are discussed.
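
    The bias from unequal local sampling can be reproduced with a small Monte Carlo experiment: two structures with identical energy distributions, one sampled ten times more often, and the oversampled one wins the minimum-energy comparison far more than half the time. The Gaussian energy model is an illustrative assumption.

```python
# Monte Carlo illustration of the oversampling bias in minimum-sampled-energy prediction.
import numpy as np

rng = np.random.default_rng(0)
trials, n_a, n_b = 10_000, 100, 10          # structure A sampled 10x more than structure B
wins_a = 0
for _ in range(trials):
    min_a = rng.normal(0.0, 1.0, n_a).min() # minimum sampled energy near structure A
    min_b = rng.normal(0.0, 1.0, n_b).min() # minimum sampled energy near structure B
    wins_a += min_a < min_b

print(f"A predicted in {wins_a / trials:.1%} of trials despite identical energy wells")
```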

  • Article | No Access

    SAMPLING AND OVERSAMPLING IN SHIFT-INVARIANT AND MULTIRESOLUTION SPACES I: VALIDATION OF SAMPLING SCHEMES

    We ask what conditions can be placed on generators φ of principal shift-invariant spaces to ensure the validity of analogues of the classical sampling theorem for bandlimited signals. Critical-rate sampling schemes lead to expansion formulas in terms of samples, while oversampling schemes can lead to expansions in which function values depend only on nearby samples. The basic techniques for validating such schemes are built on the Zak transform and the Poisson summation formula. Validation conditions are phrased in terms of orthogonality, smoothness, and self-similarity, as well as bandlimitedness or compact support of the generator. Effective sampling rates which depend on the length of support of the generator or of its Fourier transform are derived.
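
    A concrete instance of such a sampling expansion, assuming the linear B-spline (hat function) as the compactly supported generator: since this generator equals 1 at the origin and 0 at the other integers, a function in the generated space is recovered exactly from its integer samples.

```python
# Sampling expansion in a principal shift-invariant space with the hat function as generator.
import numpy as np

def hat(x):
    return np.maximum(0.0, 1.0 - np.abs(x))        # compactly supported generator phi

def reconstruct(coeffs, x):
    return sum(c * hat(x - k) for k, c in enumerate(coeffs))   # f = sum_k c_k phi(. - k)

coeffs = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])  # coefficients c_k defining f in V(phi)
x = np.linspace(0, 5, 501)
f = reconstruct(coeffs, x)                          # the "continuous" signal
f_samples = f[::100]                                # its integer samples f(0), ..., f(5)
print("integer samples equal the coefficients:", np.allclose(f_samples, coeffs))
```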