Sparsest cut problems are important graph partitioning problems with wide applications in expander graphs, Markov chains, and image segmentation. In this paper, we study the edge-weighted version of the sparsest cut problem, which minimizes the ratio of the total weight of edges between blocks to the total weight of edges incident to vertices in one block. We first prove that the problem is NP-hard even for edge-weighted graphs with bridges. Then, combining and generalizing submodular functions and the principal partition, we design a graph algorithm that improves an initial bipartition and runs in polynomial time, using network flow computations as subroutines.
Based on the convergent sequence of SDP relaxations for multivariate polynomial optimization problems (POPs) by Lasserre (2006), Waki et al. (2006) constructed a sequence of sparse SDP relaxations to solve sparse POPs efficiently. Nevertheless, the size of the sparse SDP relaxation remains the major obstacle to solving POPs of higher degree. This paper proposes an approach that transforms general POPs into quadratic optimization problems (QOPs), which allows the size of the SDP relaxation to be reduced substantially. We introduce different heuristics that yield equivalent QOPs and show how the sparsity of a POP is maintained under the transformation procedure. As the most important issue, we discuss how to increase the quality of the SDP relaxation for a QOP. Moreover, we increase the accuracy of the solution of the SDP relaxation by applying additional local optimization techniques. Finally, we demonstrate the high potential of this approach through numerical results for large-scale POPs of higher degree.
In this paper, we employ the sparsity-constrained least squares method to reconstruct sparse signals from noisy measurements in the high-dimensional case, and derive the existence of an optimal solution under certain conditions. We propose an inexact sparse projected gradient method for numerical computation and discuss its convergence. Moreover, we present numerical results demonstrating the efficiency of the proposed method.
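The projected-gradient idea behind such methods can be sketched as follows. This is a minimal illustration of sparsity-constrained least squares via projected gradient (iterative hard thresholding); the step size, iteration count, and demo data are illustrative choices of ours, not the paper's inexact scheme:

```python
import numpy as np

def sparse_projected_gradient(A, y, s, iters=200):
    """Sketch: minimize ||Ax - y||^2 subject to ||x||_0 <= s via
    projected gradient (iterative hard thresholding)."""
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
    x = np.zeros(n)
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))      # gradient step
        idx = np.argpartition(np.abs(z), -s)[-s:]  # keep s largest magnitudes
        x = np.zeros(n)
        x[idx] = z[idx]
    return x

# demo: recover a 3-sparse vector from 40 noisy measurements in dimension 100
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = sparse_projected_gradient(A, y, s=3)
```

The projection step is exact here (hard thresholding onto the s-sparse set); an "inexact" variant of the kind the paper analyzes would allow approximate projections.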
Designing an efficient filtering technique is an ill-posed problem, especially for images corrupted by high-density noise. Most existing techniques suffer from edge degradation and texture distortion. Therefore, in this paper, an efficient weighted nuclear norm minimization (NNM)-based filtering technique that preserves the edges and texture information of filtered images is proposed. The proposed technique improves the low-rank approximation of nonlocal self-similarity matrices to deal with the over-shrinkage problem. Extensive experiments reveal that the proposed technique preserves the edges and texture details of the filtered image with fewer visual artifacts. The proposed technique outperforms competitive filtering techniques in terms of structural similarity index metric (SSIM), peak signal-to-noise ratio (PSNR) and edge preservation index (EPI).
A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, where a set of nonlinear mappings of the original data is applied as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary, but also perform feature selection on the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings, respectively. We derive a variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set, and several benchmark data sets demonstrate the aggressive and robust feature selection capability and comparable classification accuracy of our method compared with existing classifiers.
In this paper, we propose a new technique for handling a class of sparse supervised learning problems, focusing on regression, binary, and multi-class classification. Sparse regression problems are associated with datasets that include both numerical and categorical features; encoding such a dataset leads to an underdetermined sparse linear system. Furthermore, logistic regression can be used for both sparse binary and multi-class classification problems, again by solving an underdetermined sparse linear system. A new technique called Sparse Approximate Pseudoinverse Preconditioning (SAPP), namely the Explicit Preconditioned Conjugate Gradient for Normal Equations (EPCGNE) method based on Generic Approximate Sparse Pseudoinverse matrices, is introduced for solving underdetermined sparse least squares problems. Numerical experiments demonstrate a significant improvement in the performance metrics of the proposed SAPP scheme compared to other learners.
In this paper, we study the sparsity of Hawking radiation from nonrotating singularity-free black holes in conformal gravity. We give a rigorous bound on the greybody factor for a massless scalar field and calculate the sparsity of Hawking radiation from the black hole. We also investigate how the bound on the greybody factor and the sparsity of Hawking radiation depend on the conformal parameters. Our study shows that increasing the conformal parameters makes the Hawking radiation sparser when the parameters are large, but less sparse when they are small.
A new deconvolution algorithm for retrieving a sparse reflectivity series from noisy seismic traces is proposed. The problem is formulated as a constrained minimization with an approximation of the zero norm of the reflectivity as the objective function. The resulting minimization is solved efficiently by the trust-region-based sequential quadratic programming (SQP) method, which provides global convergence and a local quadratic convergence rate under suitable assumptions. The null-space decomposition method and the de-biasing method are employed to reduce computational complexity and further improve accuracy. Synthetic simulations indicate that the spikes of the reflectivity, in both position and amplitude, are recovered effectively by the proposed approach.
In the Markowitz mean–variance portfolio optimization problem, the estimation of the inverse covariance matrix is not trivial and can even be intractable, especially when the dimension is very high. In this paper, we propose a linear-programming portfolio optimizer (LPO) to solve the Markowitz optimization problem in both low-dimensional and high-dimensional settings. Instead of directly estimating the inverse covariance matrix Σ−1, the LPO method estimates the portfolio weights Σ−1μ by solving an l1-constrained optimization problem. Moreover, we prove that the LPO estimator asymptotically yields the maximum expected return while preserving the risk constraint. To offer practical insight into the LPO approach, we provide a comprehensive implementation procedure for estimating the portfolio weights via the Dantzig selector with sequential optimization (DASSO) algorithm and for selecting the sparsity parameter through cross-validation. Simulations on synthetic data and empirical data from the Fama–French and Center for Research in Security Prices (CRSP) databases validate the performance of the proposed method in comparison with existing proposals.
Compressive sampling (CS) is a signal processing paradigm in which data compression is performed simultaneously with sampling, by measuring linear functionals of the original signal in the analog domain. Provided the signal is sufficiently sparse in some basis, it is guaranteed that the original signal can be stably reconstructed from significantly fewer measurements than the sampling theorem requires, which brings considerable practical convenience. In the field of civil engineering, there are many application scenarios for CS, as numerous civil engineering problems can be formulated as sparse inverse problems with linear measurements. In recent years, CS has seen extensive theoretical development and many practical applications in civil engineering. Inevitable modelling and measurement uncertainties have motivated a Bayesian probabilistic perspective on the inverse problem of CS reconstruction. Furthermore, the advancement of deep learning techniques for efficient representation has also contributed to relaxing the strict sparsity assumption in CS. This paper reviews the advances and applications of CS in civil engineering, focusing on challenges arising from data acquisition and analysis. The reviewed theories are also applicable to inverse problems in broader scientific fields.
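A CS reconstruction of the kind discussed above can be sketched with orthogonal matching pursuit, a standard greedy decoder (our illustrative choice; the review covers many reconstruction algorithms, and the measurement matrix and signal below are made-up demo data):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares fit on the
    selected support."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# demo: exact recovery of a 3-sparse signal from 40 noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80); x_true[[3, 30, 61]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, s=3)
```

With far fewer measurements (40) than ambient dimensions (80), recovery succeeds because the signal is sparse, which is exactly the premise the review builds on.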
Restoring a degraded image during image analysis typically requires many iterations, which entail long waiting times and slow scanning, resulting in inefficient restoration. A small number of measurements is enough to recover an image in good condition. Owing to tree sparsity, a 2D wavelet tree reduces the number of coefficients and iterations needed to restore the degraded image. All wavelet coefficients are extracted with overlaps as low and high sub-band spaces and ordered so that they decompose along the tree-ordered structured path. Some articles have addressed the problems of tree sparsity and total variation (TV), but few authors have exploited the benefits of tree sparsity. In this paper, a spatial variation regularization algorithm based on tree order is implemented to adapt the window size and variation estimators, reducing the loss of image information and addressing the image smoothing problem. The acceptance rate of the tree-structured path relies on local variation estimators to regularize the performance parameters and update them to restore the image. To this end, the Localized Total Variation (LTV) method is proposed and implemented on a 2D wavelet tree-ordered structured path based on the proposed image smoothing adjustment scheme. Finally, a reliable reordering algorithm is proposed to reorder the set of pixels and increase the reliability of the restored image. Simulation results clearly show that the proposed method improves performance compared with existing image restoration methods.
This paper studies robust regression problems associated with the q-norm loss (q≥1) and the ϵ-insensitive q-norm loss in a reproducing kernel Hilbert space. We establish a variance-expectation bound under an a priori noise condition on the conditional distribution, which is the key technique for measuring the error bound. Explicit learning rates are given under approximation-ability assumptions on the reproducing kernel Hilbert space.
In this paper, we consider recovering a signal that is sparse in terms of a tight frame from undersampled measurements via the lp-minimization problem for 0<p≤1. In [Compressed sensing with coherent tight frames via lp-minimization for 0<p≤1, Inverse Probl. Imaging 8 (2014) 761–777], Li and Lin proved that when 0<δ2s<1, there exists a ˉp∈(0,1], depending on δ2s, such that for any p∈(0,ˉp], each solution of the lp-minimization problem approximates the true signal well. The constant δ2s is the D-RIP constant of order 2s, first introduced by Candès et al. in [Compressed sensing with coherent and redundant dictionaries, Appl. Comput. Harmon. Anal. 31 (2011) 59–73]. The main aim of this paper is to give a closed-form expression for ˉp. We show that for every D-RIP constant δ2s∈(0,1), if 0<p≤ˉp, where
Spectral algorithms form a general framework that unifies many regularization schemes in learning theory. In this paper, we propose and analyze a class of thresholded spectral algorithms that are designed based on empirical features. Soft thresholding is adopted to achieve sparse approximations. Our analysis shows that, without a sparsity assumption on the regression function, the output functions of thresholded spectral algorithms are represented by empirical features with satisfactory sparsity, and the convergence rates are comparable to those of classical spectral algorithms in the literature.
We consider the minimum norm interpolation problem in the ℓ1(ℕ) space, aiming at constructing a sparse interpolation solution. The original problem is reformulated in the pre-dual space, thereby inducing a norm in a related finite-dimensional Euclidean space. The dual problem is then transformed into a linear programming problem, which can be solved by existing methods. With that done, the original interpolation problem is solved via an elementary finite-dimensional linear algebra equation. A specific example is presented to illustrate the proposed method, in which a sparse solution in the ℓ1(ℕ) space is compared with the dense solution in the ℓ2(ℕ) space. This example shows that a solution of the minimum norm interpolation problem in the ℓ1(ℕ) space is indeed sparse, while that of the minimum norm interpolation problem in the ℓ2(ℕ) space is not.
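A finite-dimensional analogue of this ℓ1-versus-ℓ2 comparison can be sketched as follows. `Phi` plays the role of finitely many interpolation conditions and `b` the prescribed values (both made-up demo data, not the paper's example); the ℓ1 problem is posed as a linear program via the standard c = u − v splitting, which is in the spirit of, but not identical to, the paper's pre-dual reduction:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 5, 30
Phi = rng.standard_normal((m, n))   # 5 interpolation conditions, 30 unknowns
b = rng.standard_normal(m)

# minimum-l1 interpolant: min sum(u)+sum(v) s.t. Phi(u - v) = b, u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([Phi, -Phi]), b_eq=b,
              bounds=(0, None), method="highs")
c_l1 = res.x[:n] - res.x[n:]

# minimum-l2 interpolant: pseudoinverse solution
c_l2 = np.linalg.pinv(Phi) @ b

nnz_l1 = int(np.count_nonzero(np.abs(c_l1) > 1e-8))
nnz_l2 = int(np.count_nonzero(np.abs(c_l2) > 1e-8))
# a vertex solution of the LP has at most m = 5 nonzeros,
# while the l2 solution is generically fully dense
```

The comparison mirrors the paper's example: the ℓ1 interpolant uses only a handful of coefficients, while the ℓ2 one spreads mass over all of them.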
Natural images are often superpositions of parts with different geometric characteristics; for instance, an image might be a mixture of cartoon and texture structures. In addition, images are often given with missing data. In this paper, we develop a method for simultaneously decomposing an image into its two underlying parts and inpainting the missing data. Our separation–inpainting method is based on an l1 minimization approach, using two dictionaries, each sparsifying one of the image parts but not the other. We introduce a comprehensive convergence analysis of our method in a general setting, utilizing the concepts of joint concentration, clustered sparsity, and cluster coherence. As the main application of our theory, we consider the problem of separating and inpainting an image into cartoon and texture parts.
Collaborative filtering-based recommendation systems have become significant in various domains due to their ability to provide personalised recommendations. In e-commerce, these systems analyse the browsing history and purchase patterns of users to recommend items. In the entertainment industry, collaborative filtering helps platforms like Netflix and Spotify recommend movies, shows and songs based on users’ past preferences and ratings. This technology also finds significance in online education, where it assists in suggesting relevant courses and learning materials based on a user’s interests and previous learning behaviour. Even though much research has been done in this domain, the problems of sparsity and scalability in collaborative filtering still exist. Data sparsity refers to users expressing preferences on too few items, which makes it difficult to learn their preferences. Recommendation systems must also keep users engaged with fast responses, so handling today’s rapidly growing data volumes is a challenge. Sparsity affects recommendation accuracy, while scalability influences the complexity of processing the recommendations. The motivation behind this paper is to design efficient algorithms that address the sparsity and scalability problems and thereby provide a better user experience and increased user satisfaction. This paper proposes two separate, novel approaches that deal with both problems. In the first approach, an improved autoencoder is used to address sparsity, and its outcome is then processed in a parallel and distributed manner using a MapReduce-based k-means clustering algorithm with the Elbow method. Since the k-means clustering technique uses a predetermined number of clusters, it may not improve accuracy; the Elbow method therefore identifies the optimal number of clusters for the k-means algorithm.
In the second approach, a MapReduce-based Gaussian Mixture Model (GMM) with Expectation-Maximization (EM) is proposed to handle large volumes of sparse data. Both proposed algorithms are implemented on the MovieLens 20M and Netflix movie recommendation datasets to generate movie recommendations and are compared with other state-of-the-art approaches using metrics such as RMSE, MAE, precision, recall, and F-score. The outcomes demonstrate that the second proposed strategy outperforms the state-of-the-art approaches.
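The k-means-plus-Elbow step can be sketched on a single machine as follows. This is a plain Lloyd's-algorithm stand-in (the paper's MapReduce distribution, autoencoder, and GMM stages are not shown), with a deterministic farthest-first seeding of our own choosing and a simple second-difference knee rule for the Elbow:

```python
import numpy as np

def farthest_first(X, k):
    # deterministic farthest-first seeding (a simple stand-in for k-means++)
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans_sse(X, k, iters=50):
    # Lloyd's algorithm; returns the within-cluster sum of squared errors
    centers = farthest_first(X, k)
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return float(((X - centers[labels]) ** 2).sum())

def elbow_k(X, k_max=8):
    # Elbow heuristic: pick the k where the SSE decrease slows down the
    # most, i.e. the largest second difference of the SSE curve
    sse = [kmeans_sse(X, k) for k in range(1, k_max + 1)]
    return int(np.argmax(np.diff(sse, 2))) + 2

# demo: three well-separated synthetic clusters
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(c, 0.5, size=(60, 2))
               for c in [(0.0, 0.0), (10.0, 10.0), (20.0, 0.0)]])
best_k = elbow_k(X)
```

In the paper's setting, the assignment step (the `dist.argmin` line) is what MapReduce parallelizes across mappers, with reducers recomputing the centroids.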
Sparse LS-SVM yields better generalization capability and reduces prediction time in comparison to the full dense LS-SVM. However, both methods require careful selection of hyper-parameters (HPs) to achieve high generalization capability. Leave-One-Out Cross Validation (LOO-CV) and k-fold Cross Validation (k-CV) are the two most widely used hyper-parameter selection methods for LS-SVMs. However, both fail to select good hyper-parameters for sparse LS-SVM. In this paper we propose a new hyper-parameter selection method, LGEM-HPS, for LS-SVM via minimization of the Localized Generalization Error (L-GEM). The L-GEM consists of two major components: the empirical mean square error and a sensitivity measure. A new sensitivity measure is derived for LS-SVM to enable LGEM-HPS to select hyper-parameters yielding an LS-SVM with smaller training error and minimum sensitivity to minor changes in the inputs. Experiments on eleven UCI data sets show the effectiveness of the proposed method for selecting hyper-parameters for sparse LS-SVM.
In this paper, we propose a sparse discriminative least squares regression (SDLSR) method for multicategory classification. The main mechanism of SDLSR is to apply the L1-norm to regularize the objective function of discriminative least squares regression (DLSR). Theoretical analysis shows that SDLSR can be viewed as a relaxation of L1-SVM. Moreover, we prove that the sparsity of the objective function effectively decreases the number of support vectors (SVs). To optimize SDLSR, an efficient iterative algorithm is proposed. The experimental results confirm the effectiveness of the proposed SDLSR.
Motivated by the problem of streaming data with high-dimensional covariates, we propose an online sparse sliced inverse regression (OSSIR) method for online sufficient dimension reduction (SDR). Existing online SDR methods focus on the case where p (the dimension of the covariates) is small. In this paper, we adapt sparse sliced inverse regression to cope with high-dimensional streaming data where the dimension p is large. There are two important steps in our method: one is to extend online principal component analysis to iteratively obtain the eigenvalues and eigenvectors of the kernel matrix; the other is to use the truncated gradient to perform online L1 regularization. Theoretical properties of the proposed online learner are established. By comparing with several existing methods in simulations and real data applications, we demonstrate the effectiveness and efficiency of our method.
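The truncated-gradient step used for online L1 regularization can be sketched generically as below. This is the standard truncated-gradient update in the style of Langford et al., applied here to a streaming least-squares demo of our own; it is not the paper's full OSSIR procedure, and the learning rate, gravity `g`, and threshold `theta` are illustrative:

```python
import numpy as np

def truncated_gradient_step(w, grad, eta, g, theta):
    """One online update: a plain SGD step, after which coefficients whose
    magnitude is at most `theta` are shrunk toward zero by eta*g and
    clipped at zero (this is what drives coordinates to exact zero)."""
    w = w - eta * grad
    small = np.abs(w) <= theta
    w[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - eta * g, 0.0)
    return w

# demo: streaming least squares with a 2-sparse true coefficient vector
rng = np.random.default_rng(0)
p = 20
w_true = np.zeros(p); w_true[:2] = [2.0, -1.5]
w = np.zeros(p)
for _ in range(3000):
    x = rng.standard_normal(p)
    y = x @ w_true + 0.1 * rng.standard_normal()
    grad = (x @ w - y) * x              # squared-loss stochastic gradient
    w = truncated_gradient_step(w, grad, eta=0.01, g=0.5, theta=0.5)
```

Coordinates whose true coefficient is zero keep getting pulled back to exactly zero by the truncation, while coefficients larger than `theta` in magnitude are left unshrunk, so the streaming estimate stays sparse.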