  • Article (Open Access)

    Heterogeneous Regularization for Fast Rendering Using Deep Spike Neural Network

    A Deep Spiking Neural Network (DSNN) trained with a heterogeneous regularization technique is proposed as a more biologically plausible approach that estimates the amount of noise in a rendering and provides a stopping criterion for fast realistic illumination. Our contribution is a model that improves the label propagation of the DSNN and is more efficient on neuromorphic hardware than a corresponding Artificial Neural Network. More specifically, we develop a biological neural model with a heterogeneous regularization technique that works similarly to the human brain and detects noise from deep spikes without relying on hand-crafted mathematical metrics to extract noise features. The objective function of the proposed DSNN consists of a supervised term and an unsupervised term: the supervised term enforces agreement between the predicted labels and the known labels, while the unsupervised term enforces smoothness of the predicted labels across all data samples. Learning a DSNN with this objective yields a more powerful learning algorithm, as sketched below. Experiments were conducted on scenes with Global Illumination and various image distortions, and the proposed model was compared with the human visual system and other state-of-the-art models. The results show better performance and advantages in efficiency, biological plausibility, and ease of implementation on Neuromorphic Hardware.
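
    The two-term objective can be illustrated with a minimal numpy sketch (the spiking dynamics are abstracted away here, and the function name, the affinity matrix W, and the weight lam are illustrative assumptions rather than the paper's notation): a supervised matching term on the labeled samples plus a graph-Laplacian smoothness term over the predictions for all samples, as in label propagation.

    ```python
    import numpy as np

    def heterogeneous_objective(F, Y, labeled_mask, W, lam=0.1):
        """Two-term semi-supervised objective sketched from the abstract.

        F : (n, c) predicted label scores for all n samples
        Y : (n, c) one-hot targets (rows for unlabeled samples are ignored)
        labeled_mask : (n,) boolean mask, True where the label is known
        W : (n, n) symmetric sample-affinity matrix used for propagation
        lam : weight balancing the unsupervised smoothness term
        """
        # Supervised term: squared error between predictions and known labels.
        sup = np.sum((F[labeled_mask] - Y[labeled_mask]) ** 2)

        # Unsupervised term: graph-Laplacian smoothness over ALL samples,
        # tr(F^T L F) = 0.5 * sum_ij W_ij ||F_i - F_j||^2.
        L = np.diag(W.sum(axis=1)) - W
        smooth = np.trace(F.T @ L @ F)

        return sup + lam * smooth
    ```

    Minimizing this scalar with respect to F (e.g. by gradient descent on the network's outputs) pulls labeled predictions toward their targets while smoothing predictions across strongly connected samples.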

  • Article (No Access)

    Laplacian Embedded Infinite Kernel Model for Semi-Supervised Classification

    Owing to its convexity and low time complexity, the Laplacian embedded support vector regression (LapESVR) model based on manifold regularization (MR) has assumed an important role in semi-supervised classification. Conventionally, the LapESVR model is built on a single kernel function, which can intrinsically describe only one feature-mapping relation. However, when the data come from a complex dataset in which multiple features must be treated, the classification performance of a single-kernel LapESVR degrades substantially, indicating that such a classification requirement is beyond its capability. In addition, the data are often affected by abnormal samples, so in practice assigning the kernel parameter a fixed value related to the average distance of the data is by no means optimal. To solve these problems, this paper proposes a Laplacian embedded infinite kernel regression (LapEIKR) model. The proposed model combines multiple kernels linearly to improve its ability to characterize data with multiple features, as is typical in semi-supervised classification of complex datasets; a sketch of this combination follows the abstract. Further, setting the parameters of the multiple kernels is cast as an optimization problem via a corresponding minimum objective function and an iterative algorithm, so that the obtained parameter values are optimal with respect to the designed objective function. Comparative experiments on the UCI, benchmark and Caltech256 datasets show that the proposed LapEIKR model improves on existing methods in terms of adaptivity and efficiency.
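
    As a rough sketch of the linear multiple-kernel idea (the RBF base kernels, the simple normalization of the weights, and all names here are illustrative assumptions; the paper learns the weights through its iterative optimization):

    ```python
    import numpy as np

    def rbf_kernel(X, gamma):
        """Gram matrix of an RBF base kernel with bandwidth gamma."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * np.clip(d2, 0.0, None))

    def combined_kernel(X, gammas, mu):
        """Linear combination K = sum_m mu_m K_m of base kernels.

        gammas : bandwidths of the base RBF kernels
        mu : nonnegative kernel weights, normalized to sum to one here;
             the paper instead optimizes them against its objective.
        """
        mu = np.asarray(mu, dtype=float)
        mu = mu / mu.sum()
        return sum(m * rbf_kernel(X, g) for m, g in zip(mu, gammas))
    ```

    A combined Gram matrix built this way can replace the single-kernel Gram matrix wherever the LapESVR formulation uses one.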

  • Article (No Access)

    Partial Label Dimensional Reduction via Semantic Difference Information and Manifold Regularization

    Partial label learning is an emerging weakly supervised learning framework that addresses the problem in which each training instance is associated with a set of candidate labels, only one of which is correct. Dimensionality reduction can effectively improve a learning system's generalization performance; however, because the ground-truth label is ambiguous, traditional dimensionality reduction methods are hard to apply directly to partially labeled data sets. Existing research exploits the linear discriminant analysis (LDA) strategy to reduce the dimensionality of partial label training examples, but this approach still suffers from false-positive labels. This paper proposes an LDA-based dimensionality reduction approach named SDIMR for partially labeled datasets. On the one hand, it introduces a manifold regularization term with semantic difference information to perform dimensionality reduction while preserving the local manifold structure; on the other hand, its iterative process alternates between dimensionality reduction and disambiguation. Briefly, our approach consists of three stages. First, we utilize a graph structure with semantic difference information to describe the topological information of the feature space. Second, we propose a new objective function containing manifold regularization terms that reduces the dimensionality of the original training data to generate a new training set. Finally, the k-nearest neighbor method is employed to disambiguate the dimensionality-reduced data, as sketched below. Extensive experiments on artificial and real-world data sets show that our method is comparable to state-of-the-art partial label learning algorithms.
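
    A minimal sketch of the third stage, kNN disambiguation in the reduced space (the brute-force distances, the voting rule, and the tie-breaking are illustrative choices, not the paper's exact procedure):

    ```python
    import numpy as np

    def knn_disambiguate(Z, candidates, k=5):
        """Pick, for each instance, the candidate label most supported by
        its k nearest neighbours in the reduced feature space.

        Z : (n, d) dimensionality-reduced features
        candidates : list of n sets, the candidate-label set of each instance
        """
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        np.fill_diagonal(d2, np.inf)          # an instance never votes for itself
        labels = []
        for i in range(Z.shape[0]):
            nbrs = np.argsort(d2[i])[:k]
            # Count how often each of i's candidate labels appears in the
            # neighbours' candidate sets; ties fall to the first maximum.
            votes = {c: sum(c in candidates[j] for j in nbrs) for c in candidates[i]}
            labels.append(max(votes, key=votes.get))
        return labels
    ```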

  • Article (No Access)

    Enhancing performance of the back-propagation algorithm based on a novel regularization method of preserving inter-object-distance of data

    Artificial neural networks, consisting of many levels of nonlinearities, have been widely used for various supervised learning tasks. At present, the most popular and effective training method is the back-propagation (BP) algorithm. Inspired by the manifold regularization framework, we introduce a novel regularization framework that aims to preserve the inter-object distances of the data; one plausible form of such a penalty is sketched below. A refined BP algorithm (IOD-BP) is then proposed by imposing this regularization framework on the objective function of the BP algorithm. Comparative experiments on various benchmark classification tasks show that the new regularized BP method significantly improves the classification accuracy of the BP algorithm.
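
    One plausible form of an inter-object-distance-preserving penalty, sketched from the abstract alone (the paper's exact formulation is not given here; the function name and its use as an additive term in the BP loss are assumptions):

    ```python
    import numpy as np

    def iod_penalty(X, H):
        """Penalize mismatch between pairwise squared distances in the
        input space X and in a hidden representation H of the network.

        X : (n, p) input features
        H : (n, q) hidden-layer activations for the same n samples
        """
        def pdist2(A):
            sq = np.sum(A ** 2, axis=1)
            return sq[:, None] + sq[None, :] - 2.0 * A @ A.T
        return np.mean((pdist2(X) - pdist2(H)) ** 2)
    ```

    In training, this scalar would be added to the usual BP loss with a small weight, so that gradients encourage the hidden representation to preserve the input-space distance structure.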