In the recent past, numerous frameworks have been designed to provide decision support for the classification of ECG signal data from wearable devices, with the aim of preventing health risks in sports. Because different frameworks produce different sets of results, it is hard to assess a framework's classification performance against other classification systems or against human experts. Classification accuracy is generally used as the measure of classification performance in this research. A novel hybrid Improved Monkey-based Search (IMS) and support vector machine (SVM) technique has been designed and developed in this research for health risk identification in ECGs. It incorporates noise handling, signal extraction, rule-based beat classification, and sliding-window arrangement using a wearable device for the sportsperson. It can be executed continuously and can give explanations for its diagnostic choices, and maximum scores have been obtained in terms of sensitivity and specificity (98.1% and 98.5%, respectively, using aggregate gross statistics, and 98.8% using aggregate average statistics), as shown in this research. Finally, experimental analysis has shown that the hybrid Improved Monkey-based Search (IMS) and support vector machine (SVM) technique achieves high precision (99.01%) in analyzing the heart rate of the sportsperson.
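A minimal sketch of the beat-classification stage described above, not the paper's exact pipeline: windowed ECG segments are fed to an SVM, and a plain scikit-learn grid search stands in for the IMS hyperparameter optimizer. Window sizes, labels, and the signal itself are placeholders.

```python
# Sketch: sliding-window ECG segments -> SVM classifier. GridSearchCV is a
# stand-in for the paper's Improved Monkey-based Search (IMS) optimizer.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sliding_windows(signal, width=180, step=90):
    """Cut a 1-D ECG trace into fixed-width windows (hypothetical sizes)."""
    return np.array([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])

rng = np.random.default_rng(0)
ecg = rng.standard_normal(20_000)          # placeholder for a denoised trace
X = sliding_windows(ecg)
y = rng.integers(0, 2, size=len(X))        # placeholder risk labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                   {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01]})
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```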
Breast cancer (BrC) is one of the most common causes of death among women worldwide. Images of the breast (mammography or ultrasound) may show an anomaly representing early indicators of BrC. However, accurate breast image interpretation necessitates labor-intensive procedures and highly skilled medical professionals. As a second opinion for the physician, deep learning (DL) tools can be useful for the diagnosis and classification of malignant and benign lesions. However, due to the lack of interpretability of DL algorithms, it is not easy for experts to understand how a label is predicted. In this work, we propose multitask U-Net saliency estimation and DL model-based breast lesion segmentation and classification using ultrasound images. A new contrast enhancement technique is proposed to improve the quality of the original images. After that, a new technique called the UNet-Saliency map is proposed for the segmentation of breast lesions. Simultaneously, a MobileNetV2 deep model is fine-tuned with additional residual blocks and trained from scratch using original and enhanced images. The purpose of the additional blocks is to reduce the number of parameters and to better learn ultrasound image features. Training is performed from scratch, and features are extracted from the deeper layers of both models. In a later step, a new cross-entropy controlled sine-cosine algorithm is developed to select the best features. The main purpose of this step is the reduction of irrelevant features for the classification phase. The selected features are then fused by employing a serial-based Manhattan Distance (SbMD) approach, and the resultant vector is classified using machine learning classifiers. The results indicate that a wide neural network (W-NN) obtained the highest accuracy of 98.9% and a sensitivity rate of 98.70% on the selected breast ultrasound image dataset. The accuracy of the proposed method is compared with state-of-the-art (SoArt) techniques, which shows its improved performance.
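A hedged Keras sketch of the backbone modification mentioned above: a MobileNetV2 trained from scratch with one extra residual block appended. The block design, filter counts, and head are assumptions; the saliency U-Net, contrast enhancement, and SbMD fusion stages are not reproduced here.

```python
# Sketch (TensorFlow/Keras): MobileNetV2 backbone plus an assumed residual
# block, standing in for the paper's modified model.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=128):
    """1x1-projected residual block appended after the backbone (assumption)."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))

inputs = tf.keras.Input(shape=(224, 224, 3))
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_tensor=inputs)  # trained from scratch
x = residual_block(backbone.output)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)  # benign vs. malignant
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```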
Cryptocurrency (CRP) has grown in popularity over the last decade. Since there is no central body to control the Bitcoin (BTC) markets, they are extremely volatile. However, several of the same variables that cause price volatility in traditional markets also affect cryptocurrencies. Several bubble phases have occurred in BTC prices, mostly during 2013 and 2017. Other digital currencies of primary importance, such as Ethereum and Litecoin, have also exhibited several bubble phases. Among traditional methods of analysis for this volatile market, only a small number of studies have focused on Machine Learning (ML) techniques. The objective of the present study is to gain in-depth knowledge of the time series properties of CRP data and to combine volatility models with ML models. In the hybrid method, we first apply the Nonlinear Generalized Autoregressive Conditional Heteroskedasticity (NGARCH) model with an asymmetric distribution to calculate standardized returns, then forecast the UP and DOWN movements of the standardized returns with ML models such as Logistic Regression (LR), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Artificial Neural Networks (ANNs), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). The findings show that the proposed hybrid approach of time series models and ML accurately predicts prices; specifically, the KNN model shows that the scheme is applicable to CRP market prediction. It is deduced that ML methods combined with volatility models tend to better forecast this volatile market.
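A sketch of the hybrid idea under stated substitutions: the `arch` package does not implement NGARCH directly, so a GJR-GARCH with skewed-t errors stands in for the asymmetric volatility model; standardized returns then feed a KNN UP/DOWN classifier. Returns, lag count, and neighbor count are placeholders.

```python
# Sketch: asymmetric GARCH-family model -> standardized returns -> KNN
# direction classifier. GJR-GARCH replaces the paper's NGARCH here.
import numpy as np
from arch import arch_model
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2_000) * 2.0   # placeholder % returns

am = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="skewt")
res = am.fit(disp="off")
z = res.resid / res.conditional_volatility         # standardized returns

lags = 5                                           # assumed feature window
X = np.column_stack([z[i:len(z) - lags + i] for i in range(lags)])
y = (z[lags:] > 0).astype(int)                     # UP = 1, DOWN = 0
split = int(0.8 * len(X))
knn = KNeighborsClassifier(n_neighbors=15).fit(X[:split], y[:split])
print("directional accuracy:", knn.score(X[split:], y[split:]))
```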
The black gram crop belongs to the Fabaceae family and its scientific name is Vigna mungo. It has high nutritional content, improves the fertility of the soil, and provides atmospheric nitrogen fixation in the soil. The quality of the black gram crop is degraded by diseases such as yellow mosaic, anthracnose, powdery mildew, and leaf crinkle, which cause economic losses to farmers and degraded production. The agriculture sector needs to classify plant nutrient deficiencies in order to increase crop quality and yield. Computer vision and deep learning technologies play a crucial role in the agricultural and biological sectors in handling a variety of difficult challenges. The typical diagnostic procedure involves a pathologist visiting the site and inspecting each plant. However, manual crop disease assessment is limited by lower accuracy and the limited availability of personnel. To address these problems, it is necessary to develop automated methods that can quickly identify and classify a wide range of plant diseases. In this paper, black gram disease classification is done through a deep ensemble model with optimal training, and the procedure of this technique is as follows: Initially, the input dataset is processed to increase its size via data augmentation, using processes such as shifting, rotation, and shearing. Then, the model starts with the noise removal of images using median filtering. Subsequent to the preprocessing, segmentation takes place via the proposed deep joint segmentation model to determine the ROI and non-ROI regions. The next process is the extraction of a feature set that includes improved multi-texton-based features, shape-based features, color-based features, and local Gabor XOR pattern features. The model combines classifiers such as Deep Belief Networks, Recurrent Neural Networks, and Convolutional Neural Networks. For tuning the optimal weights of the model, a new swarm intelligence-based algorithm termed the Self-Improved Dwarf Mongoose Optimization algorithm (SIDMO) is introduced. Over the past two decades, nature-based metaheuristic algorithms have gained popularity because of their ability to solve various global optimization problems with optimal solutions. This training model ensures the enhancement of classification accuracy. The accuracy of SIDMO, at around 94.82%, is substantially higher than that of the existing models: FPA = 88.86%, SSOA = 88.99%, GOA = 85.84%, SMA = 85.11%, SRSR = 85.32%, and DMOA = 88.99%.
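A small sketch of the preprocessing stage only (augmentation by shifting, rotation, and shearing, followed by median-filter denoising). The augmentation ranges and filter kernel size are assumptions, not the paper's settings.

```python
# Sketch of the preprocessing stage: shift/rotate/shear augmentation and
# median filtering; all parameter values are assumptions.
import numpy as np
from scipy.ndimage import median_filter
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(width_shift_range=0.1,
                               height_shift_range=0.1,
                               rotation_range=20,
                               shear_range=0.2)

leaf = np.random.rand(1, 128, 128, 3).astype("float32")  # placeholder image
augmented = next(augmenter.flow(leaf, batch_size=1))[0]
denoised = median_filter(augmented, size=(3, 3, 1))      # per-channel 3x3
```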
The benefits of using an automatic dietary assessment system to accompany diabetes patients and prediabetic persons in controlling the risk factor, also referred to as the obesity “pandemic”, are now widely proven and accepted. However, there is no universal solution, as people's eating habits depend on context and culture. This project is the cornerstone for future work by researchers and health professionals in the field of automatic dietary assessment of Mauritian dishes. We propose a process to produce a food dataset for Mauritian dishes using a Generative Adversarial Network (GAN) and a fine-tuned Convolutional Neural Network (CNN) model for identifying Mauritian food dishes. The outputs and findings of this research can be used in the process of automatic calorie calculation and food recommendation, primarily using ubiquitous devices like mobile phones via mobile applications. Using the Adam optimizer with carefully fixed hyper-parameters, we achieved an accuracy of 95.66% and a loss of 3.5% on the recognition task.
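A sketch of the training configuration only: a small Keras CNN compiled with an Adam optimizer whose hyperparameters are pinned explicitly. The architecture, class count, and the values shown (Adam's defaults) are assumptions, not the paper's tuned settings.

```python
# Sketch: CNN compiled with explicitly fixed Adam hyperparameters.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(20, activation="softmax"),  # assumed no. of dishes
])
opt = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9,
                               beta_2=0.999, epsilon=1e-7)
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```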
For different applications, various handcrafted descriptors are reported in the literature. Their results are satisfactory for the applications for which they were proposed. Furthermore, comparative studies in the literature discuss these handcrafted descriptors. The main drawback noticed in these studies is that the implementation is restricted to a single application. This work fills this gap and provides a comparative study of 10 handcrafted descriptors for two different applications: face recognition (FR) and palmprint recognition (PR). The 10 handcrafted descriptors analyzed are local binary pattern (LBP), horizontal elliptical LBP (HELBP), vertical elliptical LBP (VELBP), robust LBP (RLBP), local phase quantization (LPQ), multiscale block zigzag LBP (MB-ZZLBP), neighborhood mean LBP (NM-LBP), directional threshold LBP (DT-LBP), median robust extended LBP based on neighborhood intensity (MRELBP-NI), and radial difference LBP (RD-LBP). Global feature extraction is performed for all 10 descriptors. PCA and SVMs are used for compaction and matching. Experiments are conducted on ORL, GT, IITD-TP and TP; the first two are face datasets and the latter two are palmprint datasets. On the face datasets, the descriptor that attains the best recognition accuracy is DT-LBP, and on the palmprint datasets it is MB-ZZLBP, which surpass the accuracies of the other compared methods.
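A minimal sketch of the pipeline for one of the 10 descriptors: uniform LBP histograms as global features, compacted with PCA and matched with an SVM. The LBP parameters, PCA dimension, and data are assumptions; the other nine descriptors would slot in analogously.

```python
# Sketch: uniform LBP histogram features -> PCA compaction -> SVM matching.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes lie in 0..P+1, giving a (P+2)-bin global histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(2)
faces = rng.random((40, 64, 64))              # placeholder face images
labels = np.repeat(np.arange(10), 4)          # 10 subjects x 4 samples
X = np.array([lbp_histogram(f) for f in faces])

model = make_pipeline(PCA(n_components=8), SVC(kernel="linear"))
model.fit(X, labels)
```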
Let H be the 16-dimensional nontrivial semisimple Hopf algebra Ha:y that appeared in Kashina’s work [Classification of semisimple Hopf algebras of dimension 16, J. Algebra 232(2) (2000) 617–663]. We obtain all simple Yetter–Drinfeld modules over H and then determine all finite-dimensional Nichols algebras satisfying ℬ(N)≅⊗i∈Iℬ(Ni), where N=⊕i∈INi and each Ni is a simple object in the category HH𝒴𝒟 of Yetter–Drinfeld modules over H. Moreover, for Nichols algebras satisfying ℬ(N)≇⊗i∈Iℬ(Ni), we list some infinite-dimensional Nichols algebras ℬ(N1⊕N2) and obtain finite-dimensional Nichols algebras of diagonal type that are either of super type or of type A2×A2. Finally, we describe some liftings of those ℬ(N) over H.
In the new era of digital communications, cyberbullying is a significant concern for society. Cyberbullying can negatively impact stakeholders, with effects ranging from the psychological to the pathological, such as self-isolation, depression, and anxiety, potentially leading to suicide. Hence, detecting any act of cyberbullying in an automated manner will help stakeholders prevent unfortunate outcomes from the victim’s perspective. Data-driven approaches, such as machine learning (ML) and particularly deep learning (DL), have shown promising results. However, the meta-analysis shows that ML approaches, particularly DL, have not been extensively studied for the Arabic text classification of cyberbullying. Therefore, in this study, we conduct a performance evaluation and comparison of various DL algorithms (LSTM, GRU, LSTM-ATT, CNN-BLSTM, CNN-LSTM and LSTM-TCN) on different datasets of Arabic cyberbullying to obtain more precise and dependable findings. Based on the models’ evaluation, a hybrid DL model is proposed that combines the best characteristics of the baseline models CNN, BLSTM and GRU for identifying cyberbullying. The proposed hybrid model improves the accuracy on all the studied datasets and can be integrated into different social media sites to automatically detect cyberbullying from Arabic social datasets. It has the potential to significantly reduce cyberbullying. The application of DL to cyberbullying detection within Arabic text classification can be considered a novel approach due to the complexity of the problem and the tedious process involved, besides the scarcity of relevant research studies.
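A hedged Keras sketch of a hybrid CNN + BLSTM + GRU text classifier in the spirit of the proposed model. Layer sizes, vocabulary size, and sequence length are assumptions, and Arabic-specific tokenization and embeddings are omitted.

```python
# Sketch: hybrid CNN + BLSTM + GRU binary text classifier (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 30_000, 100              # assumed preprocessing output
model = tf.keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.GRU(32),                            # sequence summary
    layers.Dense(1, activation="sigmoid"),     # bullying vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```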
The prevalence of student dropout in academic settings is a serious issue that affects individuals and society as a whole. Timely intervention and support can be provided to such students if we obtain an accurate prediction of student performance. However, class imbalance and data complexity in education data are major challenges for traditional predictive analytics. Our research focuses on utilising machine learning techniques to predict student performance while handling imbalanced datasets. To address the imbalanced class problem, we employed both oversampling and undersampling techniques in our decision tree ensemble methods for the risk classification of prospective students. The effectiveness of the classifiers was evaluated by varying the sizes of the ensembles and the oversampling and undersampling ratios. Additionally, we conducted experiments to integrate feature selection processes with the best ensemble classifiers to further enhance the prediction. Based on the extensive experimentation, we concluded that ensemble methods such as Random Forest, Bagging, and Random Undersampling Boosting perform well in terms of measures such as Recall, Precision, F1-score, Area Under the Receiver Operating Characteristic Curve, and Geometric Mean. The F1-score of 0.849 produced by the Random Undersampling Boosting classifier in conjunction with the Least Absolute Shrinkage and Selection Operator feature selection method indicates that this ensemble produces the best results.
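A sketch of the best-performing combination reported above: LASSO-based feature selection feeding a RUSBoost ensemble via imbalanced-learn. The dataset, alpha, and ensemble size are placeholders, not the study's settings.

```python
# Sketch: LASSO feature selection -> RUSBoost on imbalanced stand-in data.
from imblearn.ensemble import RUSBoostClassifier
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1_000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)     # imbalanced stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    SelectFromModel(Lasso(alpha=0.01)),        # LASSO feature selection
    RUSBoostClassifier(n_estimators=100, random_state=0),
)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```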
Recent advancements in structural health monitoring have been significantly driven by the integration of artificial intelligence technologies. This study employs a combination of supervised machine learning techniques, including classification and regression, to accurately detect and localize local thickness-reduction defects in a cantilever beam. Our approach utilizes a dataset of 100 signals of the beam’s free-side displacement, comprising 84 defective and 16 healthy states, for training the machine learning models. Signal processing involves the application of five distinct mode decomposition methods to decompose each signal into its Intrinsic Mode Functions (IMFs). Additionally, four dimensionality reduction methods have been used to reduce the dimensions of the signals. Feature extraction is performed using seven frequency-domain, two time-domain, and three time-frequency-domain methods to capture pertinent patterns and characteristics within the signals. We evaluate the performance of five classification methods and 10 regression methods for predicting the location of defects. Our results demonstrate the efficacy of combining specific feature extraction and dimensionality reduction techniques with classification methods, achieving multi-class classification accuracies of up to 99.55%. Moreover, regression methods, particularly the Bayesian ridge regressor, exhibit high accuracy in predicting defect locations, with an R² value of 99.94% and minimal Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) values. This study highlights the potential of integrating regression- and classification-based machine learning approaches for precise damage detection and localization in beam structures.
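A sketch of the regression branch under simplifying assumptions: the mode-decomposition step is omitted, simple frequency-domain features are taken from the displacement signals, and a Bayesian ridge regressor predicts the (synthetic) defect location, scored with R² and MAE.

```python
# Sketch: frequency-domain features -> Bayesian ridge defect-location
# regression; signals and locations are random placeholders.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
signals = rng.standard_normal((100, 1_024))   # 100 displacement records
location = rng.uniform(0, 1, 100)             # normalized defect position

spectra = np.abs(np.fft.rfft(signals))        # simple frequency-domain features
X = spectra[:, :50]                           # keep low-frequency bins
X_tr, X_te, y_tr, y_te = train_test_split(X, location, random_state=0)

reg = BayesianRidge().fit(X_tr, y_tr)
pred = reg.predict(X_te)
print("R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
```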
In this paper, we study Helly graphs of finite combinatorial dimension, i.e. whose injective hull is finite-dimensional. We describe very simple fine simplicial subdivisions of the injective hull of a Helly graph, following work of Lang. We also give a very explicit simplicial model of the injective hull of a Helly graph, in terms of cliques which are intersections of balls.
We use these subdivisions to prove that any automorphism of a Helly graph with finite combinatorial dimension is either elliptic or hyperbolic. Moreover, every such hyperbolic automorphism has an axis in an appropriate Helly subdivision, and its translation length is rational with uniformly bounded denominator.
This work compares the performance of different algorithms — quantum Fourier transform (QFT), the Gauss–Newton method (GNM), hyperfast, the Metropolis-adjusted Langevin algorithm (MALA), and nonparametric classification and regression trees (NCART) — for the classification of fetal health states from fetal heart rate (FHR) signals. In the conducted research, the effectiveness of each algorithm was measured using confusion matrices, which gave information about class precision, recall, and total accuracy for three classes: Normal, Suspect, and Pathological. The QFT algorithm gives an overall accuracy of 90%; it is highly reliable in recognizing Normal (94% F1-score) and Pathological states (91% F1-score) but performs poorly on Suspect cases, at a 58% F1-score. The GNM method gives an accuracy of 88%, performing well on Normal cases (93% F1-score) but poorly on Suspect (50% F1-score) and Pathological classifications (82% F1-score). The hyperfast algorithm yielded an accuracy of 89%, performing well on Normal classifications with an F1-score of 93% but less well on Suspect states with an F1-score of 56%. The MALA algorithm outperformed all other algorithms tested in this study, giving an overall accuracy of 91% and adequately classifying Normal, Suspect, and Pathological states with F1-scores of 94%, 63%, and 90%, respectively; the algorithm is therefore quite robust and reliable for fetal health monitoring. The NCART algorithm achieved an accuracy of 89%, showing strong classification capability for Normal cases (94% F1-score) and Pathological cases (88% F1-score) but only moderate performance on Suspect cases (53% F1-score). Overall, while all the algorithms exhibit potential for fetal health classification, MALA stands out as the most effective, offering reliable classification across all health states. These findings highlight the need for further refinement, particularly in enhancing the detection of Suspect conditions, to ensure comprehensive and accurate fetal health monitoring.
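For reference, the per-class figures quoted above follow mechanically from a 3-class confusion matrix. The sketch below shows the computation; the matrix values are illustrative, not the study's.

```python
# Sketch: per-class precision/recall/F1 and overall accuracy from an
# illustrative 3-class confusion matrix.
import numpy as np

cm = np.array([[160,  8,  2],     # rows: true Normal/Suspect/Pathological
               [ 12, 25,  5],     # cols: predicted classes
               [  2,  3, 40]])

for i, name in enumerate(["Normal", "Suspect", "Pathological"]):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()
    recall = tp / cm[i, :].sum()
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
print("accuracy:", np.trace(cm) / cm.sum())
```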
The widespread utilization of groundwater in various sectors, including households for drinking purposes as well as the agricultural and industrial domains, has elevated its status as an indispensable and crucial natural resource. Groundwater has seen significant changes in both quantity and quality. The Water Quality Index (WQI), which depends on a number of factors, remains a crucial gauge of water quality (WQ) and a key component of efficient water management. An automated method for forecasting WQ would benefit the administration. The main goal of this project is to develop a machine learning (ML) model to forecast the quality of groundwater in several areas of Tamil Nadu (TN), India. The available dataset encompasses comprehensive data on groundwater attributes, including parameters such as pH, electrical conductivity (EC), total hardness (TH), calcium (Ca²⁺), magnesium (Mg²⁺), sodium (Na⁺), bicarbonate (HCO₃⁻), nitrate (NO₃⁻), sulfate (SO₄²⁻), and chloride (Cl⁻). In this study, various ML regression algorithms, such as linear, least angle, random forest, and support vector regressor models, were compared with an ensemble model (EM) to predict WQI, and the results were evaluated using performance metrics. It is found that the EM has a lower RMSE, of the order of 2.4×10⁻⁶. Further, the predicted WQI values are used to classify the districts of TN.
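A sketch of the comparison setup: the four named regressors combined in a simple averaging ensemble (scikit-learn's VotingRegressor, one plausible choice of EM) and scored by RMSE. Features and the WQI target are synthetic placeholders for the groundwater attributes.

```python
# Sketch: linear, least-angle, random forest, and SVR regressors averaged in
# a VotingRegressor, standing in for the ensemble model (EM).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Lars, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.random((500, 10))                     # 10 water-quality parameters
wqi = X @ rng.random(10) * 100                # synthetic WQI target
X_tr, X_te, y_tr, y_te = train_test_split(X, wqi, random_state=0)

em = VotingRegressor([("lin", LinearRegression()), ("lars", Lars()),
                      ("rf", RandomForestRegressor(random_state=0)),
                      ("svr", SVR())])
em.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, em.predict(X_te)) ** 0.5
print("ensemble RMSE:", rmse)
```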
In this work, we hybridize the Genetic Quantum Algorithm with the Support Vector Machine classifier for gene selection and classification of high-dimensional microarray data. We named our algorithm GQASVM. Its purpose is to identify a small subset of genes that can be used to separate two classes of samples with high accuracy. A comparison of the approach with different methods from the literature, in particular GASVM and PSOSVM [2], was carried out on six different datasets obtained from microarray experiments dealing with cancer (leukemia, breast, colon, ovarian, prostate, and lung) and available on the Web. The experiments demonstrated the very good performance of the method. The first contribution shows that the algorithm GQASVM is able to find genes of interest and improve the classification in a meaningful way. The second important contribution consists of the discovery of new and challenging results on the datasets used.
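A sketch of evolutionary gene selection with an SVM fitness, in the spirit of GQASVM but simplified: the quantum rotation-gate update is replaced by classical truncation selection with bit-flip mutation, so this is plain GA+SVM, not the paper's exact algorithm. Data, population size, and rates are placeholders.

```python
# Sketch: binary gene masks evolved with an SVM cross-validation fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           random_state=0)    # microarray-like stand-in

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()

pop = rng.random((20, 200)) < 0.05            # sparse initial gene masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]   # truncation selection
    children = parents[rng.integers(0, 10, 20)].copy()
    children ^= rng.random(children.shape) < 0.01   # bit-flip mutation
    pop = children
best = pop[np.argmax([fitness(m) for m in pop])]
print("genes selected:", best.sum())
```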
The present paper studies a particular collection of classification problems, i.e., the classification of recursive predicates and languages, for arriving at a deeper understanding of what classification really is. In particular, the classification of predicates and languages is compared with the classification of arbitrary recursive functions and with their learnability. The investigation undertaken is refined by introducing classification within a resource bound resulting in a new hierarchy. Furthermore, a formalization of multi-classification is presented and completely characterized in terms of standard classification. Additionally, consistent classification is introduced and compared with both resource bounded classification and standard classification. Finally, the classification of families of languages that have attracted attention in learning theory is studied, too.
We investigate the combination of Kohonen networks with kernel methods in the context of classification. We use the idea of kernel functions to handle products of vectors of arbitrary dimension. We indicate how to build Kohonen networks with robust classification performance by transforming the original data vectors into a possibly infinite-dimensional space. The resulting Kohonen networks preserve a non-Euclidean neighborhood structure of the input space that fits the properties of the data. We show how to optimize the transformation of the data vectors in order to obtain higher classification performance. We compare kernel-Kohonen networks with regular Kohonen networks on a classification task.
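The core trick can be shown in a few lines: feature-space distances are computed purely from kernel evaluations via ||φ(x) − φ(w)||² = k(x,x) − 2k(x,w) + k(w,w), so the best-matching unit never requires the (possibly infinite-dimensional) map φ explicitly. In the sketch below the prototypes are kept in input space for simplicity; a full kernel-Kohonen network would store them as feature-space combinations of data points.

```python
# Sketch: best-matching-unit search with a kernel-induced distance.
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian kernel; broadcasts over a grid of prototype vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

rng = np.random.default_rng(6)
prototypes = rng.random((25, 4))      # a 5x5 map, prototypes in input space
x = rng.random(4)

d2 = 2.0 - 2.0 * rbf(x, prototypes)   # k(x,x) = k(w,w) = 1 for the RBF kernel
bmu = int(np.argmin(d2))
print("best-matching unit:", bmu)
```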
This paper defines the truncated normalized max product operation for the transformation of states of a network and provides a method for solving a set of equations based on this operation. The operation serves as the transformation for the set of fully connected units in a recurrent network that might otherwise consist of linear threshold units. Component values of the state vector and outputs of the units take values in the set {0, 0.1, …, 0.9, 1}. The result is a much larger state space, for a given number of units and size of connection matrix, than for a network based on threshold units. Since the operation defined here can form the basis of transformations in a recurrent network with a finite number of states, fixed points or cycles are possible, and a network based on this operation can be used as an associative memory or pattern classifier, with fixed points taking on the role of prototypes. Discrete fully recurrent networks have proven themselves to be very useful as associative memories and as classifiers. However, they are often based on units that have binary states. The effect of this is that data consisting of vectors in ℝⁿ have to be converted to vectors in {0, 1}ᵐ with m much larger than n, since binary encoding based on positional notation is not feasible. This implies a large increase in the number of components. This effect can be lessened by allowing more states for each unit in the network. The proposed network demonstrates the properties that are desirable in an associative memory very well, as the simulations show.
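A heavily hedged sketch of one plausible reading of such a state update: max-product aggregation, normalization, truncation to [0, 1], and quantization onto the 11-level state set. The paper's exact definition may differ; this only illustrates iterating a discrete-state transformation toward a fixed point or cycle.

```python
# Sketch (one possible interpretation): truncated normalized max-product
# update over states quantized to {0, 0.1, ..., 1}.
import numpy as np

LEVELS = np.round(np.arange(0, 1.1, 0.1), 1)  # {0, 0.1, ..., 0.9, 1}

def step(W, s):
    raw = (W * s).max(axis=1)                          # max-product aggregation
    raw = np.clip(raw / max(raw.max(), 1e-12), 0, 1)   # normalize, truncate
    return LEVELS[np.abs(LEVELS[:, None] - raw).argmin(axis=0)]  # quantize

rng = np.random.default_rng(10)
W = rng.random((6, 6))                        # fully connected recurrent net
s = LEVELS[rng.integers(0, 11, 6)]
for _ in range(20):                           # iterate toward fixed point/cycle
    s = step(W, s)
print("state:", s)
```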
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic, and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation, and the algorithm is extended to supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
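A software sketch of ART1-style unsupervised classification of binary feature vectors, illustrating the two properties named above: the vigilance parameter rho acts as the tunable sensitivity, and inputs that resonate with no stored template create a new class automatically. This is a plain-Python illustration, not the analogic CNN-UM implementation.

```python
# Sketch: fast-learning ART1 with vigilance-controlled category creation.
import numpy as np

def art1(inputs, rho=0.7, alpha=1e-3):
    weights = []                               # one binary template per class
    labels = []
    for I in inputs:
        choice = [(I & w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(choice)[::-1]:     # try categories by choice value
            match = (I & weights[j]).sum() / max(I.sum(), 1)
            if match >= rho:                   # vigilance test passed
                weights[j] = I & weights[j]    # fast learning: shrink template
                labels.append(int(j))
                break
        else:                                  # no category resonates
            weights.append(I.copy())           # automatic new class creation
            labels.append(len(weights) - 1)
    return labels

X = (np.random.default_rng(7).random((10, 16)) > 0.5).astype(int)
print(art1(X))
```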
We focus on the problem of prediction with confidence and describe a recently developed learning algorithm called the transductive confidence machine for making qualified region predictions. Its main advantage, in comparison with other classifiers, is that it is well-calibrated, with the number of prediction errors strictly controlled by a given predefined confidence level. We apply the transductive confidence machine to the problems of acute leukaemia and ovarian cancer prediction, using microarray and proteomics pattern diagnostics, respectively. We demonstrate that the algorithm performs well, yielding well-calibrated and informative predictions whilst maintaining a high level of accuracy.
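A minimal sketch of the transductive idea: each candidate label is tentatively assigned to the test point, nonconformity scores are recomputed over the augmented set, and the label stays in the prediction region if its p-value exceeds the significance level epsilon (one minus the confidence level). The nearest-neighbor nonconformity score and the data are illustrative choices, not the paper's diagnostics setup.

```python
# Sketch: transductive conformal region prediction with a simple
# nearest-neighbor nonconformity score.
import numpy as np

def nc_scores(X, y):
    """Nonconformity: distance to the nearest same-class point."""
    scores = np.empty(len(X))
    for i in range(len(X)):
        same = (y == y[i]) & (np.arange(len(X)) != i)
        d = np.linalg.norm(X[same] - X[i], axis=1)
        scores[i] = d.min() if same.any() else np.inf
    return scores

def region_predict(X, y, x_new, epsilon=0.05):
    region = []
    for label in np.unique(y):
        Xa = np.vstack([X, x_new])
        ya = np.append(y, label)              # tentatively assume the label
        a = nc_scores(Xa, ya)
        p = (a >= a[-1]).mean()               # transductive p-value
        if p > epsilon:
            region.append(int(label))
    return region                             # long-run errors <= epsilon

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.repeat([0, 1], 20)
print("prediction region:", region_predict(X, y, np.array([2.5, 2.5])))
```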
Fuzzy decision trees are a powerful, top-down, hierarchical search methodology for extracting human-interpretable classification rules. However, they are often criticized for poor learning accuracy. In this paper, we propose Neuro-Fuzzy Decision Trees (N-FDTs), a fuzzy decision tree structure with a neural-network-like parameter adaptation strategy. In the forward cycle, we construct fuzzy decision trees using any standard induction algorithm, such as fuzzy ID3. In the feedback cycle, the parameters of the fuzzy decision trees are adapted using a stochastic gradient descent algorithm by traversing back from the leaf to the root nodes. With this strategy, the hierarchical structure of the fuzzy decision trees is kept intact during the parameter adaptation stage. The proposed approach of applying the backpropagation algorithm directly to the structure of fuzzy decision trees improves their learning accuracy without compromising comprehensibility (interpretability). The proposed methodology has been validated through computational experiments on real-world datasets.
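A toy sketch of the feedback cycle's parameter-update idea on a single fuzzy layer: Gaussian membership centers, widths, and leaf consequents are tuned by stochastic gradient descent on a squared error. A real N-FDT backpropagates through the whole tree; this one-level example only illustrates the update rules, and all values are placeholders.

```python
# Sketch: SGD on Gaussian membership parameters of a one-level fuzzy "tree".
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(-2, 2, 200)
t = (x > 0).astype(float)                     # toy binary target

c = np.array([-1.0, 1.0])                     # membership centers
s = np.array([1.0, 1.0])                      # membership widths
w = np.array([0.2, 0.8])                      # leaf consequents
lr = 0.05

for epoch in range(50):
    for xi, ti in zip(x, t):
        mu = np.exp(-((xi - c) ** 2) / (2 * s ** 2))   # firing strengths
        yhat = (mu * w).sum() / mu.sum()               # weighted output
        err = yhat - ti                                # d(0.5*err^2)/d yhat
        dyd_mu = (w - yhat) / mu.sum()                 # d yhat / d mu
        w -= lr * err * mu / mu.sum()                  # gradient steps
        c -= lr * err * dyd_mu * mu * (xi - c) / s ** 2
        s -= lr * err * dyd_mu * mu * (xi - c) ** 2 / s ** 3
        s = np.maximum(s, 0.1)                         # keep widths positive
```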