
  Bestsellers

  • Article (No Access)

    NO-REFERENCE VIDEO QUALITY MEASUREMENT WITH SUPPORT VECTOR REGRESSION

    A novel approach for no-reference video quality measurement is proposed in this paper. First, various feature extraction methods are used to quantify the quality of videos. Then, a support vector regression model is trained and used to predict the quality of unseen samples. Six different regression models are compared with the support vector regression model. The experimental results indicate that combining different video quality features with a support vector regression model significantly outperforms other methods for no-reference video quality measurement.
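    The train-and-predict pipeline above can be sketched minimally. The abstract does not name its features or kernel, so the sketch below uses invented per-video features and a dependency-free ordinary-least-squares stand-in for the support vector regression model:

```python
# Sketch of a no-reference video quality pipeline: per-video feature
# vectors are mapped to subjective quality scores by a regression model.
# The paper uses support vector regression; as a dependency-free stand-in
# we fit a linear model by ordinary least squares.  The feature names and
# all numbers below are invented for illustration.

def fit_linear(X, y):
    """Solve the normal equations X^T X w = X^T y by Gauss-Jordan elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] +
         [sum(r[i] * t for r, t in zip(X, y))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical per-video features: [bias, blockiness, blur, noise level]
train_X = [[1, 0.2, 0.1, 0.05], [1, 0.8, 0.4, 0.30],
           [1, 0.5, 0.2, 0.10], [1, 0.9, 0.7, 0.50]]
train_y = [4.5, 2.1, 3.6, 1.2]          # subjective quality scores (MOS-like)
w = fit_linear(train_X, train_y)
score = predict(w, [1, 0.4, 0.2, 0.1])  # quality estimate for a new video
```

    A real system would replace `fit_linear` with a trained SVR and compute the feature vector from the decoded frames.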

  • Article (No Access)

    Traffic-driven SIR epidemic spread dynamics on scale-free networks

    Traffic flow affects the transmission and distribution of pathogens. The large-scale traffic flow that emerges with the rapid development of global economic integration plays a significant role in epidemic spread. To more accurately capture the time characteristics of traffic-driven epidemic spread, new parameters are added to represent the change of the infection rate over time in the traffic-driven Susceptible–Infected–Recovered (SIR) epidemic spread model. Based on epidemic data collected in Hebei Province, a linear regression method is used to estimate the infection rate parameter, and an improved traffic-driven SIR epidemic spread dynamics model is established. The impact of different link-closure rules, traffic flow and average degree on the epidemic spread is studied. The maximum instantaneous number of infected nodes and the maximum number of ever-infected nodes are obtained through simulation. The simulations show that closing links between small-degree nodes inhibits the epidemic spread more effectively than closing links between large-degree nodes. In addition, reducing traffic flow and increasing the average degree of the network can also slow the epidemic outbreak. The study provides a practical scientific basis for epidemic prevention departments to conduct traffic control during epidemic outbreaks.
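    The core modeling idea, an SIR model whose infection rate varies with time, can be sketched as follows. The decay form and every constant here are illustrative assumptions, not the paper's fitted Hebei values:

```python
# Minimal SIR simulation with a time-dependent infection rate, echoing the
# abstract's time-varying infection-rate parameter.  Fractions of the
# population are tracked; explicit Euler integration is used.
import math

def simulate_sir(s0, i0, r0, beta0, decay, gamma, days, dt=0.1):
    s, i, r = s0, i0, r0
    for k in range(int(days / dt)):
        t = k * dt
        beta = beta0 * math.exp(-decay * t)   # infection rate falls with time
        new_inf = beta * s * i * dt           # S -> I transfers this step
        new_rec = gamma * i * dt              # I -> R transfers this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = simulate_sir(s0=0.99, i0=0.01, r0=0.0,
                       beta0=0.5, decay=0.05, gamma=0.1, days=200)
```

    The paper's traffic-driven version additionally routes infections along network traffic flows; this sketch keeps only the temporal ingredient.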

  • Article (No Access)

    BUDGET ESTIMATION AND CONTROL FOR BAG-OF-TASKS SCHEDULING IN CLOUDS

    Commercial cloud offerings, such as Amazon's EC2, let users allocate compute resources on demand, charging based on reserved time intervals. While this gives great flexibility to elastic applications, users lack guidance for choosing between multiple offerings in order to complete their computations within given budget constraints. In this work, we present BaTS, our budget-constrained scheduler. Using a small task sample, BaTS can estimate costs and makespan for a given bag on different cloud offerings. It provides the user with a choice of options before execution and then schedules the bag according to the user's preferences. BaTS requires no a priori information about task completion times. We evaluate BaTS by emulating different cloud environments on the DAS-3 multi-cluster system. Our results show that BaTS correctly estimates budget and makespan for the scenarios investigated; the user-selected schedule is then executed within the given budget limitations.
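    The sample-based estimation step can be illustrated with back-of-the-envelope arithmetic. The billing rule (per started hour) and all prices and runtimes below are invented, not BaTS's actual model:

```python
# Rough sketch of BaTS-style estimation: from a small sample of task
# runtimes, extrapolate makespan and cost for a whole bag of tasks on a
# machine type billed per started hour-interval.
import math

def estimate(sample_runtimes, n_tasks, n_machines, price_per_hour):
    mean_rt = sum(sample_runtimes) / len(sample_runtimes)  # seconds/task
    tasks_per_machine = math.ceil(n_tasks / n_machines)
    makespan_s = tasks_per_machine * mean_rt               # est. wall time
    billed_hours = math.ceil(makespan_s / 3600)            # per-interval billing
    cost = billed_hours * n_machines * price_per_hour
    return makespan_s, cost

makespan, cost = estimate(sample_runtimes=[50, 70, 60],   # ~1-minute tasks
                          n_tasks=1000, n_machines=10,
                          price_per_hour=0.10)
```

    Repeating the estimate for each cloud offering yields the menu of (makespan, cost) options the user chooses from.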

  • Article (Free Access)

    WHETHER CONSUMER SATISFACTION BENEFITS THE INVESTMENT PORTFOLIO: EMPIRICAL EVIDENCE FROM HONG KONG

    This paper investigates the role of a consumer satisfaction index (CSI) for financial investments in the Hong Kong market. Using yearly data for the Hong Kong consumer satisfaction index (HKCSI) to compile a CSI at company level, the effect of consumer satisfaction on company market value is identified. A hypothesized investment portfolio based only on company-level CSI is created, and its return is compared with a widely used index measuring stock market performance in Hong Kong. A formal statistical test of the outperformance of portfolios that load on consumer satisfaction is conducted. Using the Capital Asset Pricing Model (CAPM), the beta risk over the entire time period is evaluated, showing that the risk of the portfolio based on company-level CSI is not significantly different from the market risk. The paper therefore concludes that consumer satisfaction can be incorporated into financial models and applied to formulating investment portfolios with better performance than the market rate in Hong Kong.
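    The CAPM beta comparison above amounts to a simple regression of portfolio returns on market returns. A minimal sketch, using fabricated return series rather than the paper's HKCSI data:

```python
# CAPM beta as the OLS slope of portfolio returns on market returns:
# beta = cov(r_p, r_m) / var(r_m).  The return series are invented.

def beta(portfolio, market):
    n = len(market)
    mp = sum(portfolio) / n
    mm = sum(market) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(portfolio, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

market_r = [0.01, -0.02, 0.03, 0.00, 0.015]
# A portfolio that tracks the market one-for-one should have beta ~= 1,
# i.e. a risk not different from the market risk.
portfolio_r = list(market_r)
b = beta(portfolio_r, market_r)
```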

  • Article (No Access)

    A GOAL PROGRAMMING APPROACH TO FUZZY LINEAR REGRESSION WITH NON-FUZZY INPUT AND FUZZY OUTPUT DATA

    Much research has been carried out on fuzzy linear regression over the past three decades. In this paper, a fuzzy linear regression model based on goal programming is proposed. The proposed model takes into account the centers of fuzzy data as an important feature as well as their spreads. Furthermore, the model can deal with both symmetric and non-symmetric data. To show the efficiency of the proposed model, it is compared with some earlier methods based on simulation studies and numerical examples. Moreover, the sensitivity of the model to outliers is discussed.
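    The abstract does not spell out the goal-programming formulation, so as a rough illustration of "fitting centers as well as spreads", the sketch below fits one closed-form regression line to the centers of triangular fuzzy outputs and a second one to their spreads (all data invented; the paper's model solves a single goal program instead):

```python
# Crisp inputs, fuzzy outputs given as (center, spread) pairs; fit the
# centers and the spreads separately by simple linear regression.  This
# is a simplified stand-in for the paper's goal-programming model.

def simple_fit(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

xs = [1.0, 2.0, 3.0, 4.0]
centers = [2.1, 4.0, 6.1, 7.9]   # fuzzy-number centers, roughly 2x
spreads = [0.5, 0.6, 0.7, 0.8]   # spreads grow with x (non-constant width)
a0, a1 = simple_fit(xs, centers)  # model for the centers
c0, c1 = simple_fit(xs, spreads)  # model for the spreads
```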

  • Article (No Access)

    Linear regression analysis of MHD Maxwell nanofluid flow over a stretched surface of varying thickness with heat flux and chemical reaction

    Non-Newtonian materials have been an appealing topic for researchers because of the variety of laboratory and industrial processes involving these fluids. There are several kinds of non-Newtonian fluids classified according to their properties. In this study, the Maxwell fluid model is analyzed due to the unique properties and applications of this non-Newtonian material. We have considered the Buongiorno model for nanofluid, which is a two-phase model that accounts for the effects of Brownian motion and thermophoresis on the transport of nanoparticles in a fluid. A stretching surface holding a chemically reactive fluid is assumed. In addition, the study also considers the impacts of heat flux and magnetic fields. The influence of various physical factors on the flow fields is presented and graphically highlighted. Using linear regression and the data point approach, the relationship between the physical parameters, such as the rate of heat and mass transfer at the surface, is investigated. The relationship between the various physical parameters was examined using the t-test approach. The Maxwell fluid parameter influences heat transmission at the surface. As the magnetic field and heat source parameters increase, the rate of heat transfer decreases. Increasing the Deborah number, chemical reaction parameter and magnetic field parameter enhances the mass transfer rate at the surface. The fluid’s velocity decreases with rising magnetic field and Maxwell fluid parameters. The heat source parameter elevates the fluid temperature, while the chemical reaction parameter reduces nanoparticle concentration.
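    The regression-plus-t-test step can be sketched as fitting the heat-transfer rate against one physical parameter and testing whether the slope differs significantly from zero. The readings below are invented, chosen only to show the decreasing trend the abstract reports for the magnetic field parameter:

```python
# Simple linear regression with a t-statistic for the slope:
# t = slope / se(slope), where se is derived from the residual variance.
import math

def regress_t(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(e * e for e in resid) / (n - 2) / sxx)
    return slope, slope / se          # slope and its t-statistic

# Hypothetical readings: heat-transfer rate vs. magnetic field parameter M
m_vals = [0.0, 0.5, 1.0, 1.5, 2.0]
heat_rate = [1.52, 1.41, 1.28, 1.22, 1.08]   # decreasing with M
slope, t_stat = regress_t(m_vals, heat_rate)
```

    A negative slope with |t| well above ~2 would indicate a statistically significant decrease, matching the trend stated in the abstract.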

  • Article (No Access)

    Molecular insights into anti-Alzheimer’s drugs through predictive modeling using linear regression and QSPR analysis

    The purpose of this paper is to discuss the use of topological indices (TIs) to anticipate the physical and biological aspects of innovative drugs used in the treatment of Alzheimer’s disease. Degree-based topological indices are generated using edge partitioning to assess the drugs Tacrine, Donepezil, Rivastigmine, Butein, Licochalcone-A and Flavokawain-A. Furthermore, using linear regression, a quantitative structure–property relationship (QSPR) model is developed to predict characteristics such as boiling point (BP), flash point (FP), molar volume (MV), molecular weight, complexity and polarizability. The findings show that topological indices have the potential to serve as a tool for drug discovery and design in the field of Alzheimer’s disease treatment.
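    The QSPR workflow can be sketched with one degree-based index, the first Zagreb index (the sum of squared vertex degrees), regressed against a property. The graphs and boiling points below are toy stand-ins, not the drug molecules from the paper:

```python
# Degree-based topological index feeding a QSPR regression: compute the
# first Zagreb index M1 = sum(deg(v)^2) from an edge list, then regress a
# property value on M1 across several molecules.

def zagreb_m1(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(d * d for d in deg.values())

# Toy molecular graphs (paths of increasing length) and invented BPs.
graphs = [[(0, 1), (1, 2)],
          [(0, 1), (1, 2), (2, 3)],
          [(0, 1), (1, 2), (2, 3), (3, 4)]]
m1_vals = [zagreb_m1(g) for g in graphs]
bps = [36.0, 69.0, 98.0]          # hypothetical boiling points (deg C)

n = len(m1_vals)
mx, my = sum(m1_vals) / n, sum(bps) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(m1_vals, bps))
         / sum((x - mx) ** 2 for x in m1_vals))
```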

  • Article (No Access)

    LOCAL SKEW CORRECTION IN DOCUMENTS

    In this paper we propose a technique for detecting and correcting the skew of text areas in a document. The documents we work with may contain several areas of text with different skew angles. First, a text localization procedure is applied based on connected component analysis. Specifically, the connected components of the document are extracted and filtered according to their size and geometric characteristics. Next, the candidate characters are grouped using a nearest-neighbor approach to form words, and then, based on these words, text lines of any skew are constructed. Then, the top-line and baseline for each text line are estimated using linear regression. Text lines in nearby locations, having similar skew angles, are grown to form text areas. For each text area a local skew angle is estimated, and then these text areas are skew-corrected independently to horizontal or vertical orientation. The technique has been extensively tested on a variety of document images, and its accuracy and robustness are compared with other existing techniques.
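    The baseline-estimation step above reduces to fitting a line through character base points and reading the skew angle off its slope. A minimal sketch with synthetic coordinates:

```python
# Estimate a text line's skew: least-squares fit through the bottom-most
# black pixel of each character, then convert the slope to an angle.
import math

def skew_angle_deg(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.degrees(math.atan(slope))

# Character base points along a line skewed by 5 degrees.
pts = [(x, x * math.tan(math.radians(5.0))) for x in range(0, 200, 20)]
angle = skew_angle_deg(pts)
```

    Rotating the text area by `-angle` then deskews it to horizontal orientation.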

  • Article (No Access)

    VARIANT POSE FACE RECOGNITION USING DISCRETE WAVELET TRANSFORM AND LINEAR REGRESSION

    Face recognition under constrained conditions is no longer a major challenge. However, even the best methods are not able to cope with real-world situations. In this paper, a robust method is proposed such that the performance of the face recognition system remains highly reliable even if the face undergoes large head rotation. Our proposed method considers local regions from one half of the face rather than using the holistic face approach, since in the former approach the "linearity" of features within the limited region is somewhat preserved regardless of the pose variation. The discrete wavelet transform is then applied to these patches in order to form face feature vectors. We train our recognizer using a linear regression algorithm to interpret the relationship between a face vector for a specific pose and its corresponding frontal face feature vector. We demonstrate that our proposed method is able to recognize a non-frontal face with high accuracy even in low-resolution images by relying only on a single frontal face in the database.

  • Article (No Access)

    A One-Sample per Individual Face Recognition Algorithm Based on Multiple One-Dimensional Projection Lines

    This paper proposes a novel approach for face recognition when only one sample per individual is available. The proposed technique, referred to as MODPL, determines a one-dimensional projection line for each individual in the dataset. Each of these lines discriminates the corresponding individual with respect to the other people in the database. The vector consisting of the projections of the individual’s raw data onto the different projection lines provides an excellent characterization of the individual. Results obtained using the XM2VTS database show that the proposed technique is capable of achieving classification rates similar to those obtained by means of the Uniform-pursuit algorithm and at least 5% higher than other currently used techniques that deal with the one-sample problem. Two additional sets of experiments were conducted on the BioID and AR databases, where the proposed algorithm showed a performance similar to the state-of-the-art algorithms. Moreover, the proposed technique allows the visualization of the most discriminative features of the individuals.

  • Article (No Access)

    Neuro-Scientific Analysis of Weights in Neural Networks

    Deep learning is a popular topic among machine learning researchers nowadays, with great strides being made in recent years to develop robust artificial neural networks that converge faster to a reasonable accuracy. Network architecture and the hyperparameters of the model are fundamental aspects of model convergence. One such important choice is the initial values of the weights, known as weight initialization. In this paper, we perform two research tasks concerned with the weights of neural networks. First, we develop three novel weight initialization algorithms inspired by the neuroscientific construction of the mammalian brain and test them on benchmark datasets against other algorithms to compare and assess their performance. We call these algorithms the lognormal weight initialization, modified lognormal weight initialization, and skewed weight initialization. We observe from our results that these initialization algorithms provide state-of-the-art results on all of the benchmark datasets. Second, we analyze the influence of training an artificial neural network on its weight distribution by measuring the correlation between the quantitative metrics of skewness and kurtosis and the model accuracy, using linear regression, for different weight initializations. The results indicate a positive correlation between network accuracy and the skewness of the weight distribution, but no corresponding relation between accuracy and kurtosis. This analysis provides further insight into the inner mechanism of neural network training via the shape of the weight distribution. Overall, the works in this paper are the first of their kind in incorporating neuroscientific knowledge into the domain of artificial neural network weights.
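    The lognormal initialization idea can be sketched as drawing weight magnitudes from a lognormal distribution, mirroring the heavy-tailed synaptic-strength distributions reported in neuroscience. The mu/sigma values and the rescaling rule below are assumptions for illustration, not the paper's exact scheme:

```python
# Hypothetical lognormal weight initializer: draw fan_in x fan_out weight
# magnitudes from a lognormal distribution, then rescale so the mean
# magnitude is ~1/sqrt(fan_in) (a common scaling heuristic, assumed here).
import random

def lognormal_init(fan_in, fan_out, mu=0.0, sigma=1.0, seed=0):
    rng = random.Random(seed)
    w = [[rng.lognormvariate(mu, sigma) for _ in range(fan_out)]
         for _ in range(fan_in)]
    target = fan_in ** -0.5
    mean_w = sum(map(sum, w)) / (fan_in * fan_out)
    return [[x * target / mean_w for x in row] for row in w]

weights = lognormal_init(fan_in=64, fan_out=32)
```

    The resulting weight matrix is positively skewed by construction, which is the distribution shape the paper's second analysis correlates with accuracy.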

  • Article (No Access)

    MACHINE-PRINTED CHINESE CHARACTER RECOGNITION BASED ON LINEAR REGRESSION

    Segmented machine-printed Chinese characters generally suffer from small distortions and small rotations due to noise and segmentation errors. These phenomena cause many conventional methods, especially those based on directional codes, to be unable to reach very high recognition rates, say above 99%. In this paper, regression analysis is proposed as a means to overcome these problems. Firstly, thinning is applied to each segmented character, which is enclosed in a proper square box and also filtered for noise reduction beforehand. Secondly, the square thinned character image is divided into 9×9 meshes (blocks), instead of the conventional 8×8, for reasons of the Chinese character's characteristics and also for global feature extraction. Thirdly, linear regression is applied, for all black points in each block, to obtain either the value of the slope angle, or a dispersion code which is derived from the sample correlation coefficient after proper transformation. Thus, each block is coded by one of three cases: 'blank', value of slope angle, or 'dispersion'. The peripheral black points are used for preclassification. Proper scores for matching two characters are designed so that learning and recognition are quite efficient. The objective of designing this optical character recognition system is to achieve very small misrecognition rates and tolerable rejection rates. Experiments with three fonts, each consisting of 5401 characters, were carried out. The overall rejection rate is 1.25% and the overall misrecognition rate is 0.33%. These are acceptable for most users.
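    The per-block coding rule can be sketched directly: regress the black-pixel coordinates in a block; a strong correlation yields a slope-angle code, a weak one yields a 'dispersion' code. The 0.5 correlation threshold below is an assumption, not the paper's transformation:

```python
# Code one mesh block of a thinned character from its black-pixel
# coordinates: 'blank', a slope angle in degrees, or 'dispersion'.
import math

def code_block(points):
    if not points:
        return "blank"
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if sxx == 0:                       # all pixels in one column: vertical stroke
        return 90.0
    r = sxy / math.sqrt(sxx * syy) if syy > 0 else 1.0
    if abs(r) < 0.5:                   # weak linear trend: assumed threshold
        return "dispersion"
    return math.degrees(math.atan(sxy / sxx))

diagonal = code_block([(i, i) for i in range(9)])            # 45-degree stroke
scatter = code_block([(0, 0), (8, 0), (0, 8), (8, 8), (4, 4)])
```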

  • Article (No Access)

    Novel Critical Gate-Based Circuit Path-Level NBTI-Aware Aging Circuit Degradation Prediction

    With the rapid development of semiconductor technology, chip integration density has grown enormously, and aging has become one of the main threats to circuit reliability. In order to develop aging degradation prediction, it is critical to evaluate aging to avoid circuit failures. At present, research on aging prediction is mainly focused on the transistor and gate levels: at the transistor level, the precision is high but the speed is low, whereas at the gate level, the accuracy is not high but the speed is very fast. In this paper, a path-level aging prediction framework based on the novel critical gate is proposed. The 10-year Negative Bias Temperature Instability (NBTI) aging delay of the critical subcircuit extracted by the novel critical gate is obtained, and the aging delay trend is learned using a linear regression model. Then the critical path aging delay can be obtained quickly based on the framework developed by machine learning using the linear regression model. The experimental results of ISCAS’85 and ISCAS’89 benchmark circuits based on 45-nm PTM show that the proposed framework is superior to the existing methods.

  • Article (No Access)

    Novel Sparse Feature Regression Method for Traffic Forecasting

    Traffic forecasting is an integral part of modern intelligent transportation systems. Although many techniques have been proposed in the literature to address the problem, most of them focus almost exclusively on forecasting accuracy and ignore other important aspects of the problem. In the paper at hand, a new method for both accurate and fast large-scale traffic forecasting, named “sparse feature regression”, is presented. Initially, a set of carefully selected features is extracted from the available traffic data. Then, some of the initial features are sparsified, namely they are transformed into sets of sparse features. Finally, a linear regression model is designed using the sparse feature set, which is trained by solving an optimization problem using a sparse approximate pseudoinverse as a preconditioner. We evaluated the proposed method by conducting experiments on two real-world traffic datasets, and the experimental results showed that the method presents the best balance between accuracy of predictions and time required for achieving them, in comparison with a set of benchmark models.
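    The "sparsify then regress" idea can be sketched with one raw feature: a time-of-day value is expanded into one-hot bin indicators, and a linear model on those disjoint indicators reduces exactly to a per-bin average flow. The bins and traffic counts are invented, and the paper's feature set and preconditioned solver are far more elaborate:

```python
# Sparse feature regression sketch: one-hot-bin a raw feature, then fit a
# linear model on the sparse indicators.  For disjoint one-hot columns,
# ordinary least squares reduces to the per-bin mean response.

def sparsify(hour, n_bins=4):
    """One-hot encode the hour of day into n_bins daily periods."""
    bin_idx = hour * n_bins // 24
    return [1.0 if j == bin_idx else 0.0 for j in range(n_bins)]

def fit(hours, flows, n_bins=4):
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for h, f in zip(hours, flows):
        j = h * n_bins // 24
        sums[j] += f
        counts[j] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def predict(coefs, hour, n_bins=4):
    return sum(c * x for c, x in zip(coefs, sparsify(hour, n_bins)))

hours = [1, 3, 8, 9, 14, 15, 20, 22]
flows = [120, 100, 900, 880, 500, 520, 300, 280]  # vehicles/hour, invented
coefs = fit(hours, flows)
```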

  • Article (No Access)

    Analysis on Prediction of COVID-19 with Machine Learning Algorithms

    During the pandemic, the most significant reason for the deep concern about COVID-19 is that it spreads from individual to individual through contact or by staying close to a diseased individual. COVID-19 has been recognized as a global pandemic, and several assessments are being performed using various numerical models. Machine Learning (ML) is now commonly used in many fields. Forecasting systems based on ML have shown their importance in interpreting perioperative effects to accelerate decision-making on the potential course of action. ML models have long been used to define and prioritize adverse threat variables in several technology domains. To manage forecasting challenges, many prediction approaches have been used extensively. The paper shows the ability of ML models to estimate the number of forthcoming COVID-19 victims, which is now considered a serious threat to civilization. It presents a comparative study of ML algorithms for predicting COVID-19, describes the data to be predicted, and analyses the attributes of COVID-19 cases in different places. It gives an underlying benchmark to exhibit the capability of ML models for future examination.

  • Article (No Access)

    Technical Analysis in Investing

    Technical analysis helps investors to better time their entry into and exit from financial asset positions. This methodology relies solely on past information on financial asset prices and volumes to predict a financial asset’s future price trend. Modern research has established that, combined with other sentiment measures such as social media, it can outperform the standard buy-and-hold strategy. Moreover, it has been documented that both novice and professional investors use technical analysis in their investing strategies. An experienced investor should combine fundamental analysis and technical analysis for better trading results. Programmers use technical analysis to create algorithmic trading systems that learn and adapt to changing trading environments and perform trading accordingly without human involvement. There are hundreds of technical tools offered by well-known trading platforms, and investors must use the specific tools that fit their trading style and risk tolerance. Moreover, different financial assets such as stocks, exchange-traded funds (ETFs), cryptocurrencies, futures, and commodities demand different sets of tools. Furthermore, investors should use these tools according to the time frame they use for trading. This paper discusses different technical tools that help traders of different time frames and different financial assets achieve better returns than the traditional buy-and-hold strategy.

  • Article (No Access)

    Forecasting Models Based on Data Analytics for Predicting Rice Price Volatility: A Case Study of the Sri Lankan Rice Market

    Paddy rice is a staple food that is common among the Sri Lankan populace. However, the frequent price variation of rice has negatively impacted the Sri Lankan economy. This is because the Sri Lankan rice market lacks the mechanisms to evaluate and predict future rice price variations, often leaving domestic traders and consumers affected by sudden price spikes. This study identifies the quantifiable economic factors that affect sudden rice price variations and presents a viable mechanism for forecasting the Domestic Rice Price (DRP). In addition, it establishes three different regression models to emphasise the relationship of DRP in Sri Lanka with three economic factors: International Rice Price (IRP), International Crude Oil Price (ICOP), and the USD Exchange Rate. Further, a time series model is formulated to forecast future variations in DRP while incorporating factors that have a significant negative correlation with the DRP. The results presented in this study show that the proposed models can be used by relevant food authorities to predict sudden hikes and dips in DRP, allowing them to establish a robust price control system.
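    The time-series side of such a study can be sketched as a first-order autoregressive forecast: regress each month's price on the previous month's, then iterate forward. The price series below is fabricated, and the paper's models additionally bring in IRP, ICOP and the USD exchange rate:

```python
# AR(1) forecast of a price series: fit y_t = c + phi * y_{t-1} by simple
# regression on the lagged series, then roll the fit forward.

def fit_ar1(series):
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    return my - phi * mx, phi            # (intercept, phi)

def forecast(series, steps, intercept, phi):
    out, last = [], series[-1]
    for _ in range(steps):
        last = intercept + phi * last
        out.append(last)
    return out

drp = [210, 214, 220, 226, 230, 236, 241, 247]   # monthly prices, invented
c, phi = fit_ar1(drp)
next_three = forecast(drp, 3, c, phi)
```

    On this steadily rising series the fitted phi sits near 1 and the forecast continues the upward trend, which is the kind of "sudden hike" signal the authorities would act on.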

  • Article (No Access)

    Robust subspace learning method for hyperspectral image classification

    Subspace learning (SL) is an important technology for extracting discriminative features for hyperspectral image (HSI) classification. However, in practical applications, some acquired HSIs are contaminated with considerable noise during the imaging process. In this case, most existing SL methods yield limited performance in the subsequent classification procedure. In this paper, we propose a robust subspace learning (RSL) method, which utilizes a local linear regression and a supervised regularization function simultaneously. To effectively incorporate the spatial information, a local linear regression is used to seek the recovered data from the noisy data within a spatial set. The recovered data not only reduce the noise effect but also include the spectral-spatial information. To utilize the label information, a supervised regularization function based on the idea of the Fisher criterion is used to learn a discriminative subspace from the recovered data. To optimize RSL, we develop an efficient iterative algorithm. Extensive experimental results demonstrate that RSL greatly outperforms many existing SL methods when the HSI data contain considerable noise.
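    The local-linear-regression recovery step can be illustrated in one dimension: each noisy sample is replaced by the value of a least-squares line fitted over its local window. The window size is an assumption, and the paper works on spatial neighbourhoods of hyperspectral pixels rather than a 1-D sequence:

```python
# Denoise a 1-D sequence by local linear regression: fit a line over each
# sample's window and keep the fitted value at the window centre.

def local_linear_smooth(ys, half=2):
    out = []
    n = len(ys)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        xs = list(range(lo, hi))
        win = ys[lo:hi]
        m = len(xs)
        mx, my = sum(xs) / m, sum(win) / m
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, win)) / sxx
        out.append(my + slope * (i - mx))     # fitted value at the centre
    return out

truth = [0.1 * i for i in range(20)]                   # clean linear signal
noise = [0.2 if i % 2 else -0.2 for i in range(20)]    # deterministic noise
noisy = [t + e for t, e in zip(truth, noise)]
recovered = local_linear_smooth(noisy)
```

    The recovered sequence is much closer to the clean signal than the noisy input, which is the effect the supervised subspace is then learned from.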

  • Article (No Access)

    INFERENCE OF LARGE-SCALE GENE REGULATORY NETWORKS USING REGRESSION-BASED NETWORK APPROACH

    Gene regulatory network modeling plays a key role in the search for relationships among genes. Many modeling approaches have been introduced to find the causal relationships between genes using time series microarray data. However, they suffer from high dimensionality, overfitting, and heavy computation time. Further, when selecting among several competing models, there is no guarantee that the chosen model is the best one. In this study, we propose a simple procedure for constructing large-scale gene regulatory networks using a regression-based network approach. We determine the optimal out-degree of the network structure by using the sum of squared coefficients obtained from all appropriate regression models. On simulated data, accuracy of estimation and robustness against noise are computed in order to compare with the vector autoregressive regression model. Our method shows high accuracy and robustness for inferring large-scale gene networks. It is also applied to Caulobacter crescentus cell cycle data consisting of 1472 genes, showing that many genes are regulated by two transcription factors, ctrA and gcrA, which are known global regulators.

  • Article (No Access)

    A new LSTM-based gene expression prediction model: L-GEPM

    Molecular biology combined with in silico machine learning and deep learning has facilitated the broad application of gene expression profiles for gene function prediction, optimal crop breeding, disease-related gene discovery, and drug screening. Although the acquisition cost of genome-wide expression profiles has been steadily declining, the cost of generating a compendium of expression profiles across thousands of samples remains high. The Library of Integrated Network-Based Cellular Signatures (LINCS) program used approximately 1000 landmark genes to predict the expression of the remaining target genes by linear regression; however, this approach ignored the nonlinear features influencing gene expression relationships, limiting the accuracy of the experimental results. We herein propose a gene expression prediction model, L-GEPM, based on long short-term memory (LSTM) neural networks, which captures the nonlinear features affecting gene expression and uses the learned features to predict the target genes. By comparing and analyzing experimental errors and the fitting effects of different prediction models, we show that the LSTM-based model, L-GEPM, achieves low error and a superior fitting effect.