  Bestsellers

  • Article (No Access)

    A COMPARATIVE MODELING AND OPTIMIZATION OF SURFACE ROUGHNESS IN THE END MILLING OF Al 3003 SUBJECTED TO NON-EQUAL CHANNEL ANGULAR PRESSING (NECAP)

    This paper highlights the surface roughness optimization of a specific material, Al 3003, which has been subjected to the non-equal channel angular pressing (NECAP) process. Considering spindle speed, feed rate, and depth of cut as input variables and surface roughness as the output variable, experiments were conducted based on the L27 orthogonal array of the Taguchi method. Four prediction models are proposed: exponential and response surface methodology (RSM) as mathematical models, and artificial neural network (ANN) models with two different training algorithms, Bayesian Regularization (BR) and Levenberg–Marquardt (LM). Applying effectiveness and performance criteria, the prediction accuracy of the exponential model (90.35%), RSM (93.07%), BR (97.83%), and LM (97.54%) shows that all proposed prediction models are sufficiently efficient. The ANN model trained with BR is found to be the best fit for predicting surface roughness. To optimize surface roughness, a newly introduced optimization method called the Intelligible-in-time Logics Algorithm (ILA) is employed. High spindle speed (1000 rev/min), low feed rate (100 mm/min), and low depth of cut (0.5 mm) form the optimum cutting parameter combination for obtaining minimum surface roughness (0.4956 μm). The results have been verified by confirmation tests and the Particle Swarm Optimization (PSO) method. ILA and PSO predict the same optimum parameter combination and minimum surface roughness, while ILA performs the optimization in less time (114.4 s), about 3.5 times faster than PSO. The paper's findings strongly advocate the application of ILA in machining data optimization.
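
    As an illustrative sketch only (not the authors' code): the ANN part of such a study maps (spindle speed, feed rate, depth of cut) to roughness with a small feed-forward network. scikit-learn has no Bayesian-Regularization or Levenberg–Marquardt trainer, so L-BFGS is used here as a stand-in, and the arrays X and y are hypothetical placeholders for the L27 experimental runs.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Placeholder rows for the Taguchi L27 runs: [spindle speed (rev/min),
        # feed rate (mm/min), depth of cut (mm)] -> surface roughness Ra (um).
        X = np.array([[1000, 100, 0.5], [800, 200, 1.0], [600, 300, 1.5]])
        y = np.array([0.50, 0.95, 1.40])   # hypothetical Ra values

        # Small MLP; solver='lbfgs' stands in for the BR/LM trainers of the paper.
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(8,), solver='lbfgs',
                         max_iter=5000, random_state=0),
        )
        model.fit(X, y)
        print(model.predict([[1000, 100, 0.5]]))   # predicted Ra at the optimum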

  • Article (No Access)

    Effect of Non-Structural Components on Over-Track Building Vibrations Induced by Train Operations on Concrete Floor

    Train-induced vibration inevitably affects the living standards and work productivity of residents and staff within transit-oriented development (TOD) buildings, especially when trains operate on a concrete floor. Traditional structural calculation models ignore the influence of non-structural components and therefore misrepresent the dynamic characteristics of floor slabs. Although the influence of non-structural components has been extensively studied, their impact on train-induced vibrations within buildings has not been considered. This study established a numerical model to reveal the effect of partition walls and floor pavement on train-induced vibrations. The results indicated that non-structural components have a significant impact on the distribution of the modal characteristics of the floor slab. The acceleration levels of floor slabs below 50 Hz are significantly affected by non-structural components. Moreover, non-structural components provide a new pathway for more efficient vibration dissipation below 20 Hz within the slab. However, vibration amplification can occur at 25–40 Hz due to slab resonance. The findings offer a practical guideline and threshold for TOD building design, thereby contributing to the enhancement of occupant comfort.

  • Article (Open Access)

    COMPUTER-AIDED DISEASE CLASSIFICATION BASED ON KPLSKELM ALGORITHM

    Objective: This study aims to explore the use of machine learning algorithms for predicting disease classification. Methods: An integrated algorithm (KPLSKELM) was proposed in this study. The algorithm employed kernel principal component analysis to transform the original data into a high-dimensional feature space, thereby enhancing its linear separability. It used the sparrow search algorithm (SSA) to optimize the weight matrix and parameters of the kernel extreme learning machine (KELM). The algorithm incorporated a Gaussian perturbation search mechanism to refine the population initialization strategy, mitigating the poor convergence rate and susceptibility to local optima of the later SSA iterations. Lévy flight perturbations were introduced during the foraging search of the sparrow population to guide the population toward appropriate step sizes, thereby increasing the diversity of the spatial search. The proposed method was experimentally validated using a binary-classification breast cancer dataset collected by Dr. William H. Wolberg at a Wisconsin hospital in the United States and a multiclass dataset of electrocardiographic recordings during childbirth. Multiple metrics were adopted to evaluate the classification performance. Results: The accuracy and F1_score of the KELM model remained relatively low across different percentages of the training set, although a recall of 1.0000 was consistently achieved. Both the SSA-improved KELM and the Lévy-improved SSA-optimized KELM exhibited better performance on the comprehensive metric F1_score and improved as the percentage of the training set increased. The KPLSKELM model outperformed the others on all metrics, with accuracy, precision, recall, and F1_score approaching or reaching the highest levels when 90% of the data was used for training. Conclusions: The proposed method demonstrated excellent performance in various disease prediction tasks and holds high practical value. It provides a reference for further assisting clinicians in making more precise treatment decisions.
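
    A minimal sketch of the KPCA-plus-KELM core on the same Wisconsin dataset as distributed with scikit-learn (the SSA/Lévy optimization is omitted; the constants C and gamma below are illustrative, not the paper's tuned values):

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.decomposition import KernelPCA
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics.pairwise import rbf_kernel

        X, y = load_breast_cancer(return_X_y=True)
        X = StandardScaler().fit_transform(X)
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.9, random_state=0)

        # Step 1: kernel PCA lifts the data into a space where it is more
        # linearly separable.
        kpca = KernelPCA(n_components=10, kernel='rbf').fit(Xtr)
        Ztr, Zte = kpca.transform(Xtr), kpca.transform(Xte)

        # Step 2: kernel extreme learning machine with closed-form output
        # weights beta = (I/C + K)^-1 T; the paper tunes C and the kernel width
        # with the (Levy-improved) SSA instead of fixing them as done here.
        C, gamma = 10.0, 0.1
        K = rbf_kernel(Ztr, Ztr, gamma=gamma)
        T = np.eye(2)[ytr]                                 # one-hot targets
        beta = np.linalg.solve(np.eye(len(Ztr)) / C + K, T)

        scores = rbf_kernel(Zte, Ztr, gamma=gamma) @ beta
        print('test accuracy:', (scores.argmax(axis=1) == yte).mean())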

  • Article (No Access)

    Performance analysis of groundwater quality index models for predicting water district in Tamil Nadu using regression techniques

    The widespread utilization of groundwater in various sectors, including households for drinking purposes and the agricultural and industrial domains, has elevated its status as an indispensable and crucial natural resource. Groundwater has seen significant changes in both quantity and quality. The Water Quality Index (WQI), which depends on a number of factors, remains a crucial gauge of water quality (WQ) and a key component of efficient water management; an automated method for forecasting WQ would therefore benefit water administrators. The main goal of this work is to develop a machine learning (ML) model to forecast the quality of groundwater in several areas of Tamil Nadu (TN), India. The available dataset encompasses comprehensive groundwater attributes, including pH, electrical conductivity (EC), total hardness (TH), calcium (Ca2+), magnesium (Mg2+), sodium (Na+), bicarbonate (HCO3−), nitrate (NO3−), sulfate (SO42−), and chloride (Cl−). In this study, various ML regression algorithms, such as linear, least angle, random forest, and support vector regressor models, were compared with an ensemble model (EM) for predicting WQI, and the results were evaluated using performance metrics. The EM is found to have the lowest RMSE, of the order of 2.4×10⁻⁶. Further, the predicted WQI values are used to classify the districts of TN.
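
    A rough sketch of the model-comparison pipeline, assuming a feature matrix X of the ten listed parameters and WQI targets y (the data below are synthetic placeholders, and the paper's exact EM construction is not specified here, so a simple averaging ensemble stands in):

        import numpy as np
        from sklearn.linear_model import LinearRegression, Lars
        from sklearn.ensemble import RandomForestRegressor, VotingRegressor
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((200, 10))                    # placeholder: pH, EC, TH, ...
        y = 100 * X[:, 0] + rng.normal(0, 1, 200)    # placeholder WQI values

        models = {
            'linear': LinearRegression(),
            'least_angle': Lars(),
            'random_forest': RandomForestRegressor(random_state=0),
            'svr': SVR(),
        }
        em = VotingRegressor(list(models.items()))   # simple averaging ensemble

        for name, m in {**models, 'ensemble': em}.items():
            rmse = -cross_val_score(m, X, y, cv=5,
                                    scoring='neg_root_mean_squared_error').mean()
            print(f'{name}: RMSE = {rmse:.3f}')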

  • Article (No Access)

    MACHINE-LEARNING TECHNIQUES IN MULTIPLE SCLEROSIS PREDICTION USING EEG

    The diagnosis and quantification of Multiple Sclerosis (MS) have typically depended on skilled doctors recognizing visual patterns in modalities such as Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG), a costly, time-consuming, and non-reproducible process. The application of Machine Learning (ML) to MS diagnosis has been getting a lot of attention in the last few years due to the volume of scientific data, the heterogeneity of disease courses, and the variety of diagnostic methods. EEG can detect important changes in the brain's inherent electrical activity, which are influenced by the changes in neural network connections associated with the inflammatory demyelination and neural damage characteristic of MS. Utilizing multimodal machine learning over clinically available data may be a contemporary strategy with great potential to facilitate early diagnosis. Considering recent EEG investigations, as well as the accessibility of their datasets, is an initial step toward utilizing ML in MS diagnosis. This paper provides a systematic review, following PRISMA guidelines, of the latest techniques for MS diagnosis with a view to applying ML in their investigations. The goal is to determine whether EEG can be considered a robust and accurate technique for MS diagnosis. In accordance with PRISMA guidelines, we consider 111 papers. In our review, 404 subjects are considered, including 209 with MS and 195 healthy controls. As a result, we generated an updated investigation of ML approaches that utilize EEG as an accurate, but less often used, method to help with early diagnosis of MS. We summarize, analyze, discuss, and synthesize the recently published works, current trends, and open research issues. The review also points out knowledge gaps regarding the necessity of validating results and addressing constraints such as limited sample sizes. Our investigation shows that the precision of supervised strategies such as kNN and SVM is typically higher than that of unsupervised strategies. Moreover, by utilizing techniques such as splitting the EEG signal into sub-bands, windowing the signal, and identifying effective features through data analysis, classification accuracies higher than 99% have been achieved. Given the high accuracy of these results, this approach is becoming the focal point of research. The high diagnostic accuracy of the proposed machine learning techniques on EEG signal analysis shows their potential to become a more widespread MS diagnostic procedure. To improve the reliability of these methods, gathering and analyzing more EEG signals from MS patients and creating more suitable EEG protocols are essential in future studies. Likewise, for dynamic and online analyses of EEG signals, in-depth studies are needed to determine more effective EEG protocols and machine-learning techniques.
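
    As a hedged illustration of the sub-band/windowing pipeline the review describes (the signal is synthetic, and the band edges, window length, and sampling rate are illustrative assumptions, not values from any reviewed paper):

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        fs = 256                          # assumed EEG sampling rate (Hz)
        bands = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 13),
                 'beta': (13, 30), 'gamma': (30, 45)}

        def band_power_features(sig, fs, win_sec=2):
            """Split a 1-D EEG signal into windows and sub-bands; return
            log band-power features, one row per window."""
            win = win_sec * fs
            feats = []
            for start in range(0, len(sig) - win + 1, win):
                w = sig[start:start + win]
                row = []
                for lo, hi in bands.values():
                    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
                    row.append(np.log(np.var(filtfilt(b, a, w)) + 1e-12))
                feats.append(row)
            return np.array(feats)

        rng = np.random.default_rng(0)
        X = np.vstack([band_power_features(rng.standard_normal(fs * 60), fs)
                       for _ in range(2)])          # placeholder recordings
        y = np.repeat([0, 1], len(X) // 2)          # 0 = control, 1 = MS
        print(cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean())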

  • Article (No Access)

    Graph Convolutional Network-Guided Mine Gas Concentration Predictor

    Coal mining has always been high-risk work; although mining technology is now quite mature, many accidents still occur every year around the world, most of them due to gas explosions, poisoning, and asphyxiation. It is therefore important to monitor and predict underground mine air quality. In this paper, we use a spectral-domain spatio-temporal graph convolutional network (GCN) for multivariate time series prediction of the underground mine air environment. The correlations among the series are learned by a self-attention mechanism, without an a priori graph, and the adjacency matrix is created dynamically by the attention mechanism. Temporal and spatial features are learned via the graph Fourier transform and its inverse in the TC module (temporal convolution) and GC module (graph convolution), respectively. Furthermore, a new loss function is designed based on the idea of residuals, which greatly improves prediction accuracy. In addition, the corresponding experimental predictions were performed on other public datasets. The results show that the model has outstanding prediction ability and high accuracy on most time-series prediction datasets, for both long-term and short-term prediction.
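
    A minimal numpy sketch of the dynamic, attention-based adjacency construction the paper relies on (dimensions are arbitrary, and the residual-forecast interpretation at the end is our reading of the loss design, not the authors' exact formulation):

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        rng = np.random.default_rng(0)
        N, d = 8, 16                        # gas sensors (graph nodes), model dim
        H = rng.standard_normal((N, d))     # per-node embeddings of the series

        # Dynamic adjacency from self-attention, with no a priori graph:
        # A = softmax(Q K^T / sqrt(d)).
        Wq, Wk = rng.standard_normal((d, d)), rng.standard_normal((d, d))
        Q, K = H @ Wq, H @ Wk
        A = softmax(Q @ K.T / np.sqrt(d))   # learned, dense adjacency matrix

        # Residual idea: let the model predict the increment over the last
        # observation and add it back, rather than predicting raw levels.
        y_last = rng.standard_normal(N)     # last observed sensor readings
        delta_hat = (A @ H)[:, 0]           # stand-in for the GCN's output
        y_hat = y_last + delta_hat          # final forecast
        print(np.round(y_hat, 3))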

  • Article (No Access)

    CONTROL OF SYNCHRONIZATION OF BRAIN DYNAMICS LEADS TO CONTROL OF EPILEPTIC SEIZURES IN RODENTS

    We have designed and implemented an automated, just-in-time (JIT) stimulation method for seizure control that couples a seizure prediction method from nonlinear dynamics with deep brain stimulation of the centromedial thalamic nuclei in epileptic rats. A comparison to periodic stimulation, with identical stimulation parameters, was also performed. The two schemes were compared in terms of their efficacy in controlling seizures, as well as their effect on the synchronization of brain dynamics. The automated JIT stimulation reduced seizure frequency and duration in 5 of the 6 rats, with significant reduction of seizure frequency (>50%) in 33% of the rats. This constituted a significant improvement over the efficacy of the periodic control scheme in the same animals: periodic stimulation increased seizure frequency in 50% of the rats, reduced seizure frequency in 3 rats, and achieved significant reduction in 1 rat. Importantly, successful seizure control was highly correlated with desynchronization of brain dynamics. This study provides initial evidence for the use of closed-loop feedback control systems in epileptic seizures, combining methods from seizure prediction and deep brain stimulation.
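
    A schematic sketch of the closed-loop logic, entirely illustrative: the prediction statistic, threshold, window length, and stimulator interface below are hypothetical stand-ins for the study's nonlinear-dynamics predictor and DBS hardware.

        import numpy as np

        def seizure_likely(eeg_window):
            """Placeholder predictor; in the study a nonlinear-dynamics measure
            computed from intracranial EEG served this role."""
            return np.var(eeg_window) > 4.0        # hypothetical threshold

        def stimulate():
            print('delivering DBS pulse train to centromedial thalamus')

        def control_loop(eeg_stream, win=256):
            buf = []
            for sample in eeg_stream:              # samples arrive in real time
                buf.append(sample)
                if len(buf) == win:
                    if seizure_likely(np.array(buf)):   # just-in-time trigger
                        stimulate()
                    buf.clear()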

  • Article (No Access)

    BLOOD GLUCOSE LEVEL NEURAL MODEL FOR TYPE 1 DIABETES MELLITUS PATIENTS

    This paper deals with blood glucose level modeling for Type 1 Diabetes Mellitus (T1DM) patients. The model is developed using a recurrent neural network trained with an extended Kalman filter-based algorithm in order to obtain an affine model, which captures the nonlinear behavior of blood glucose metabolism. The goal is to derive a dynamical mathematical model of T1DM as the response of a patient to meals and subcutaneous insulin infusion. Experimental data from a continuous glucose monitoring system are utilized for identification and for testing the applicability of the proposed scheme to T1DM subjects.
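
    A compact sketch of extended-Kalman-filter training for a tiny glucose predictor (the network structure, the numeric Jacobian, and all noise covariances are illustrative assumptions; the paper's recurrent architecture is not reproduced here):

        import numpy as np

        def f(w, u):
            """Tiny predictor: next glucose from inputs u = (previous glucose,
            insulin, meal); the structure is illustrative only."""
            W1 = w[:12].reshape(4, 3)
            w2 = w[12:16]
            return w2 @ np.tanh(W1 @ u)

        def ekf_train(data, n_w=16, q=1e-4, r=1.0):
            """EKF over the weights: data is an iterable of (u, y) pairs."""
            w = np.random.default_rng(0).normal(0, 0.1, n_w)
            P = np.eye(n_w)
            for u, y in data:
                eps = 1e-6                 # numeric Jacobian of output w.r.t. w
                H = np.array([(f(w + eps * e, u) - f(w, u)) / eps
                              for e in np.eye(n_w)])
                S = H @ P @ H + r          # innovation variance (scalar output)
                K = P @ H / S              # Kalman gain
                w = w + K * (y - f(w, u))  # weight update from prediction error
                P = P - np.outer(K, H @ P) + q * np.eye(n_w)
            return w

        # Usage sketch: stream (u, y) pairs built from CGM/insulin/meal logs.
        rng = np.random.default_rng(1)
        fake = [(rng.random(3), rng.random()) for _ in range(200)]
        w = ekf_train(fake)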

  • Article (No Access)

    MULTIVARIATE TIME SERIES MODELING IN A CONNECTIONIST APPROACH

    Multivariate models in the framework of artificial neural networks have been constructed for systems where time series data of several variables are known. The models have been tested using computer-generated data for the Lorenz and Hénon systems. They are found to be robust and to give accurate short-term predictions. Analysis of the models throws some light on theoretical questions related to multivariate "embedding" and the removal of redundancy in the embedding.
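
    A quick way to reproduce the flavor of this experiment (the network size, integration span, and train/test split are arbitrary choices, not the paper's):

        import numpy as np
        from scipy.integrate import solve_ivp
        from sklearn.neural_network import MLPRegressor

        def lorenz(t, s, sigma=10.0, rho=28.0, beta=8 / 3):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t = np.linspace(0, 50, 5001)
        sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=t).y.T

        # Multivariate one-step-ahead prediction: (x, y, z)_t -> (x, y, z)_{t+1}
        X, Y = sol[:-1], sol[1:]
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0).fit(X[:4000], Y[:4000])
        err = np.abs(net.predict(X[4000:]) - Y[4000:]).mean()
        print('mean one-step error on held-out data:', err)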

  • Article (No Access)

    EVALUATION OF THE QUANTITATIVE PREDICTION OF A TREND REVERSAL ON THE JAPANESE STOCK MARKET IN 1999

    In January 1999, the authors published a quantitative prediction that the Nikkei index should recover from its 14-year low of January 1999 and reach ≈20,500 a year later. The purpose of the present paper is to evaluate the performance of this specific prediction as well as the underlying model: the forecast, performed at a time when the Nikkei was at its lowest (as we can now judge in hindsight), correctly captured the change of trend as well as the quantitative evolution of the Nikkei index since its inception. As the change of trend from sluggish to recovery was judged quite unlikely by many observers at the time, a Bayesian analysis shows that a skeptical (resp. neutral) Bayesian sees a prior belief in our model amplified into a posterior belief 19 times larger (resp. reaching the 95% level).
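
    The Bayesian bookkeeping behind the "19 times larger" statement can be illustrated with Bayes' rule in odds form; the prior values below are assumptions chosen to match the abstract's wording, not numbers taken from the paper.

        # Posterior odds = Bayes factor x prior odds.
        bayes_factor = 19.0     # strength of evidence implied by the abstract

        def posterior(prior):
            prior_odds = prior / (1.0 - prior)
            post_odds = bayes_factor * prior_odds
            return post_odds / (1.0 + post_odds)

        # A skeptic with a small prior sees belief multiplied roughly 19-fold
        # (exactly 19-fold in the small-prior limit) ...
        print(posterior(0.01))   # ~0.16 from a 1% prior
        # ... while a neutral 50% prior is pushed to the 95% level.
        print(posterior(0.50))   # 0.95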

  • Article (No Access)

    INCREMENTS OF UNCORRELATED TIME SERIES CAN BE PREDICTED WITH A UNIVERSAL 75% PROBABILITY OF SUCCESS

    We present a simple and general result: the signs of the variations or increments of uncorrelated time series are predictable with a remarkably high success probability of 75% for symmetric sign distributions. The origin of this paradoxical result is explained in detail. We also present tests on synthetic, financial, and global temperature time series.
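
    The result is easy to verify numerically. For an i.i.d. series x with symmetric distribution (zero median), if the current value sits above the median, the next independent value is more likely to lie below it; so predicting the sign of the next increment to be opposite to the sign of the current value succeeds 3/4 of the time:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1_000_000)    # uncorrelated, symmetric series

        predicted = -np.sign(x[:-1])          # bet against the current level
        actual = np.sign(np.diff(x))          # sign of the realized increment
        # ~0.75 for any continuous symmetric distribution, as claimed.
        print((predicted == actual).mean())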

  • Article (No Access)

    STRONG EVENTS IN THE SAND-PILE MODEL

    We study the sand-pile model introduced by Bak et al. The system accumulates particles one by one, and from time to time it topples; every toppling initiates an event. The distribution of event sizes follows a power law for all events except the strongest ones. The fraction of the strongest events does not depend on the system length. The number of particles and their clustering increase before the strongest events.
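
    A minimal two-dimensional Bak–Tang–Wiesenfeld sand-pile (lattice size and particle count arbitrary) that exhibits the power-law event-size statistics the abstract refers to:

        import numpy as np

        rng = np.random.default_rng(0)
        L = 32
        grid = np.zeros((L, L), dtype=int)
        sizes = []

        for _ in range(20000):                    # add particles one by one
            i, j = rng.integers(0, L, 2)
            grid[i, j] += 1
            size = 0
            while (grid >= 4).any():              # toppling cascade = one event
                for a, b in zip(*np.where(grid >= 4)):
                    grid[a, b] -= 4
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= a + da < L and 0 <= b + db < L:
                            grid[a + da, b + db] += 1   # grains at the edge leave
            if size:
                sizes.append(size)

        # Event sizes follow a power law except for the largest
        # (system-spanning) events, as the abstract describes.
        print(np.histogram(sizes, bins=np.logspace(0, 3, 10))[0])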

  • Article (No Access)

    REINFORCEMENT LEARNING WITH GOAL-DIRECTED ELIGIBILITY TRACES

    The eligibility trace is the most important mechanism used so far in reinforcement learning to handle delayed reward. Here, we introduce a new kind of eligibility trace, the goal-directed trace, and show that it results in more reliable learning than the conventional trace. In addition, we also propose a new efficient algorithm for solving the goal-directed reinforcement learning problem.
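
    For context, a minimal sketch of conventional accumulating eligibility traces in tabular TD(λ); the paper's goal-directed trace modifies how credit is propagated toward the goal, and that variant is not reproduced here.

        import numpy as np

        def td_lambda(episodes, n_states, alpha=0.1, gamma=0.95, lam=0.9):
            """episodes: iterable of [(state, reward, next_state), ...] lists."""
            V = np.zeros(n_states)
            for episode in episodes:
                e = np.zeros(n_states)                   # eligibility trace
                for s, r, s_next in episode:
                    delta = r + gamma * V[s_next] - V[s] # TD error
                    e[s] += 1.0                          # accumulating trace
                    V += alpha * delta * e               # credit traced states
                    e *= gamma * lam                     # decay the trace
            return V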

  • Article (No Access)

    THE DYNAMICS OF CRIME AND PUNISHMENT

    This article analyzes crime development, one of the largest threats in today's world, frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process, which accounts for fluctuations. Expected values and variances of crime development are determined. The deterrent effect of imprisonment follows from the amount of time spent in imprisonment. Each criminal maximizes expected utility, defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation for the criminal's utility-maximizing response to a given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. To our knowledge, this dynamic perspective has not been presented earlier. Empirical data on crime intensity and imprisonment for Norway, England and Wales, and the US support the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers, who can envision arbitrarily sophisticated imprisonment functions and foresee their impact on crime development.
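
    To make the stability argument concrete, here is a deliberately simplified Euler integration of a first-order crime dynamic with an imprisonment response; the functional forms and constants are hypothetical, as the paper derives its own equation from expected-utility maximization.

        import numpy as np

        def simulate(policy, c0=1.0, growth=0.05, T=200, dt=0.1):
            """dc/dt = growth*c - policy(c): crime grows unless the imprisonment
            response `policy` rises fast enough with criminal activity c."""
            c, path = c0, []
            for _ in range(int(T / dt)):
                c = max(c + dt * (growth * c - policy(c)), 0.0)
                path.append(c)
            return np.array(path)

        weak = simulate(lambda c: 0.02 * c)       # too-weak response: growth
        strong = simulate(lambda c: 0.03 * c**2)  # steeper response: stabilizes
        print(weak[-1], strong[-1])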

  • Article (No Access)

    Prediction of collective opinion in consensus formation

    In consensus formation dynamics, the effects of leaders and interventions have been widely studied, as they have many applications, for instance in politics and commerce. However, the problem is how to know whether an intervention is necessary. In this paper, we theoretically propose a method for predicting the tendency and final state of the collective opinion. By giving each agent a conviction c_i, which measures the agent's ability to insist on his own opinion, we present an opinion formation model in which agents with high conviction naturally exhibit the properties of opinion leaders. Results reveal that, although each agent initially receives an opinion evenly distributed in the range [-1, 1], the collective opinion of the steady state may deviate in the positive or negative direction because of the initial bias of the leaders' opinions. We further obtain the correlation coefficient of the linear relationship between the collective opinion and the initial bias from both experimental and theoretical analysis. Thus, we can predict the final state at the very beginning of the dynamics, provided we know the opinions of a small portion of the population. The prediction affords us more time and opportunities to make reactions and interventions.
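
    A toy version of a conviction-weighted opinion model; the pairwise update rule below is an assumption for illustration, as the paper defines its own dynamics.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        opinion = rng.uniform(-1, 1, N)         # initial opinions in [-1, 1]
        conviction = rng.pareto(2.0, N) + 1     # heavy tail -> a few 'leaders'

        for _ in range(20000):
            i, j = rng.integers(0, N, 2)
            if i != j:
                # the less-convinced agent moves toward the more-convinced one,
                # by a step inversely proportional to its own conviction
                lo, hi = (i, j) if conviction[i] < conviction[j] else (j, i)
                opinion[lo] += (opinion[hi] - opinion[lo]) / conviction[lo]

        print('collective opinion:', opinion.mean())

    In such a toy run, the steady-state mean tracks the initial bias of the high-conviction agents, which is what makes early prediction from a small sample possible.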

  • Article (No Access)

    Prediction of industrial electricity consumption based on grey cluster weighted Markov model

    Accurate prediction of industrial electricity consumption is beneficial not only for maintaining the steady development of the economy but also for conserving energy. To improve the prediction accuracy of industrial electricity consumption, a grey cluster weighted Markov model is proposed and applied to predict industrial electricity consumption in four different regions of China. The prediction results are compared with those of the traditional discrete grey prediction model, showing that the present model is more effective in terms of prediction accuracy, stability, and extensibility. The research can provide theoretical references for the "West-East electricity transmission project" in China.
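
    The grey-model backbone of such predictors is the classical GM(1,1); a minimal implementation follows (the clustering and Markov correction steps of the paper are omitted, and the consumption series is a hypothetical example):

        import numpy as np

        def gm11(x0, n_forecast=3):
            """Classical GM(1,1) grey prediction for a short positive series x0."""
            x1 = np.cumsum(x0)                             # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                   # mean generating seq.
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + n_forecast)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat, prepend=0)[len(x0):]    # back to original scale

        use = np.array([523.0, 556.0, 590.0, 628.0, 661.0])  # hypothetical GWh
        print(gm11(use))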

  • Article (No Access)

    Evidence of Discrete Scale Invariance in DLA and Time-to-Failure by Canonical Averaging

    Discrete scale invariance, which corresponds to a partial breaking of the scaling symmetry, is reflected in the existence of a hierarchy of characteristic scales l₀, l₀λ, l₀λ², …, where λ is a preferred scaling ratio and l₀ a microscopic cut-off. Signatures of discrete scale invariance have recently been found in a variety of systems ranging from rupture, earthquakes, Laplacian growth phenomena, and "animals" in percolation to financial market crashes. We believe it to be a quite general, albeit subtle, phenomenon. Indeed, the practical problem in uncovering an underlying discrete scale invariance is that standard ensemble averaging procedures destroy it as if it were pure noise. This is due to the fact that, while λ depends only on the underlying physics, l₀, on the contrary, is realization-dependent. Here, we adapt and implement a novel, so-called "canonical" averaging scheme which re-sets the l₀ of different realizations to approximately the same value. The method is based on the determination of a realization-dependent effective critical point obtained from, e.g., a maximum susceptibility criterion. We demonstrate the method on diffusion-limited aggregation and a model of rupture.
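
    The gist of the scheme can be sketched as follows: each realization is re-anchored at its own effective critical point (here, the argmax of a susceptibility-like observable, an illustrative stand-in) before averaging, so the realization-dependent scale l₀ no longer washes out the log-periodic structure.

        import numpy as np

        def canonical_average(realizations, observable, width=100):
            """Align each realization at its effective critical point (argmax
            of `observable`) and average the aligned windows."""
            aligned = []
            for r in realizations:
                tc = int(np.argmax(observable(r)))   # effective critical point
                lo, hi = tc - width, tc + width
                if lo >= 0 and hi <= len(r):
                    aligned.append(r[lo:hi])
            return np.mean(aligned, axis=0)

        # Usage sketch: a naive ensemble mean over unaligned realizations would
        # blur any discrete-scale (log-periodic) signature; the aligned mean
        # preserves it.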

  • Article (No Access)

    PREDICTING FINANCIAL DISTRESS OF CHINESE LISTED COMPANIES USING ROUGH SET THEORY AND SUPPORT VECTOR MACHINE

    Effectively predicting corporate financial distress is an important and challenging issue for companies. This research aims at predicting financial distress using an integrated model of rough set theory (RST) and support vector machines (SVM), in order to find a better early-warning method and enhance prediction accuracy. Through several comparative experiments with a dataset of Chinese listed companies, rough set theory is shown to be an effective approach for reducing redundant information. Our results indicate that the SVM performs better than the back propagation neural network (BPNN) when used for corporate financial distress prediction.
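
    A rough stand-in for the pipeline: a genuine rough-set reduct algorithm is not available in scikit-learn, so a mutual-information filter is substituted here purely for illustration, and the financial ratios are synthetic placeholders.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 30))     # placeholder financial ratios
        y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)

        clf = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=8),  # stand-in for the RST reduct
            SVC(kernel='rbf'),
        )
        print(cross_val_score(clf, X, y, cv=5).mean())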

  • Article (No Access)

    Predicting Retweeting Behavior Based on BPNN in Emergency Incidents

    Emergency incidents can trigger heated discussions on microblogging platforms, and a great number of tweets related to emergency incidents are retweeted by users. Consequently, social media big data related to emergency incidents is generated on various social media platforms and can be used to predict users' retweeting behavior. In this paper, the characteristics of individuals' retweeting behavior in emergency incidents are analyzed, and 11 important characteristics are extracted from recipient characteristics, retweeter characteristics, tweet content characteristics, and external media coverage. A back propagation neural network (BPNN) model called PRBBP is used to predict retweeting behavior in such emergency incidents. Based on PRBBP, an algorithm called PRABP is proposed to predict the number of retweets in emergency incidents. The experiments are performed on a large-scale dataset crawled from Sina Weibo. The simulation results show that both the PRBBP model and the PRABP algorithm proposed in this paper have excellent predictive performance.
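
    A bare-bones sketch of the BPNN step; the 11 features and labels below are random placeholders for the engineered recipient/retweeter/content/media-coverage characteristics, and the network size is an arbitrary choice.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((5000, 11))             # 11 placeholder features
        score = X @ rng.random(11) + rng.normal(0, 0.3, 5000)
        y = (score > np.median(score)).astype(int)   # 1 = retweets, 0 = not

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        bpnn = make_pipeline(StandardScaler(),
                             MLPClassifier(hidden_layer_sizes=(16,),
                                           max_iter=1000, random_state=0))
        bpnn.fit(Xtr, ytr)
        print('held-out accuracy:', bpnn.score(Xte, yte))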

  • Article (No Access)

    Modeling and predicting the tensile strength of poly (lactic acid)/graphene nanocomposites by using support vector regression

    Based on an experimental dataset obtained under different process parameters, support vector regression (SVR) combined with particle swarm optimization (PSO) for parameter optimization was employed to establish a mathematical model for predicting the tensile strength of poly (lactic acid) (PLA)/graphene nanocomposites. Four variables, namely graphene loading, temperature, time, and speed, were employed as input variables, while tensile strength acted as the output variable. In a leave-one-out cross-validation test on 30 samples, the maximum absolute percentage error does not exceed 1.5%, the mean absolute percentage error (MAPE) is only 0.295%, and the correlation coefficient (R²) is as high as 0.99. Compared with the results of a response surface methodology (RSM) model, the errors estimated by SVR are smaller than those achieved by RSM, revealing that the generalization ability of SVR is superior to that of the RSM model. Meanwhile, multifactor analysis is adopted to investigate the significance of each experimental factor and its influence on the tensile strength of PLA/graphene nanocomposites. This study suggests that the SVR model can provide important theoretical and practical guidance for designing experiments and controlling the tensile strength of PLA/graphene nanocomposites via rational process parameters.
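
    A condensed sketch of PSO-tuned SVR; the swarm size, search ranges, and synthetic 30-sample dataset are illustrative assumptions, whereas the paper tunes on its own experimental samples.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((30, 4))     # graphene loading, temperature, time, speed
        y = 40 + 20 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, 30)  # placeholder

        def fitness(p):             # p = (log10 C, log10 gamma)
            svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
            return cross_val_score(svr, X, y, cv=5,
                                   scoring='neg_mean_absolute_error').mean()

        # Minimal PSO over the 2-D log-parameter space (maximizing fitness).
        n, dims, iters = 12, 2, 30
        pos = rng.uniform([-2, -3], [3, 1], (n, dims))
        vel = np.zeros((n, dims))
        pbest = pos.copy()
        pbest_val = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmax()]
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dims))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            vals = np.array([fitness(p) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmax()]

        print('best (log10 C, log10 gamma):', gbest)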