Cardiovascular disease is among the leading causes of death worldwide. Several computer-aided decision-support systems have been developed to assist cardiologists in detecting heart disease and thereby reducing mortality. This paper investigates a largely unexplored sub-domain, textural features, for classifying phonocardiogram (PCG) recordings as normal or abnormal using the Grey Level Co-occurrence Matrix (GLCM). GLCM features are extracted from spectrograms of PCG signals taken from the PhysioNet 2016 benchmark dataset, and Random Forest, Support Vector Machine, Neural Network, and XGBoost classifiers are applied to assess the condition of the heart from these features. The GLCM results are compared with two other textural feature extraction methods, viz. the structural co-occurrence matrix (SCM) and local binary patterns (LBP). Experimental results show that machine learning models trained on GLCM feature sets attain higher classification accuracy than these peer approaches. This methodology can therefore help medical specialists assess a patient's heart condition more precisely and accurately.
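The GLCM step described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the quantisation depth, pixel offset, and the three Haralick-style descriptors (contrast, energy, homogeneity) are illustrative assumptions, and the "spectrogram" here is a random toy array standing in for a real PCG spectrogram.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1)):
    """Grey Level Co-occurrence Matrix of a 2-D array (e.g. a spectrogram).

    Values are quantised into `levels` grey levels; pixel pairs separated
    by `offset` (rows, cols) are counted, then the matrix is symmetrised
    and normalised into a joint probability distribution.
    """
    lo, hi = image.min(), image.max()
    q = ((image - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(int)
    dr, dc = offset
    rows, cols = q.shape
    m = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            m[q[r, c], q[r + dr, c + dc]] += 1
    m = m + m.T          # symmetrise (count both pair directions)
    return m / m.sum()   # normalise to probabilities

def glcm_features(p):
    """Haralick-style texture descriptors computed from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Toy stand-in for a PCG spectrogram: random time-frequency magnitudes
spec = np.abs(np.random.default_rng(0).normal(size=(64, 128)))
p = glcm(spec)
print(glcm_features(p))
```

In the paper's pipeline, descriptors such as these would form the feature vector passed to the Random Forest, SVM, Neural Network, or XGBoost classifier.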
The sharp rise in cardiovascular cases is a serious concern for medical experts worldwide. Early prediction of heart health enables valuable risk stratification for patients and helps specialists make effective decisions. Heart sound signals convey information about the condition of a patient's heart. Motivated by the success of cepstral features in speech signal classification, the authors use three such feature sets, viz. Mel-frequency cepstral coefficients (MFCCs), gammatone frequency cepstral coefficients (GFCCs), and the Mel-spectrogram, to classify phonocardiograms as normal or abnormal. Existing research has extensively explored only MFCCs and Mel-based feature sets for phonocardiogram classification. In this work, the authors instead fuse GFCCs with MFCCs and the Mel-spectrogram, achieving an accuracy of 0.96 with sensitivity and specificity of 0.91 and 0.98, respectively. The proposed model is validated on the publicly available PhysioNet 2016 benchmark dataset.
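The MFCC pipeline (power spectrogram → mel filterbank → log → DCT) can be sketched from scratch with numpy/scipy. This is a simplified illustration, not the authors' feature extractor: frame length, filter count, and coefficient count are assumed values, and the 2000 Hz toy signal merely stands in for a real PCG recording. GFCCs follow the same structure but replace the triangular mel filterbank with a gammatone filterbank, which is omitted here for brevity.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import stft

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):            # rising slope
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):           # falling slope
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def mfcc(signal, sr, n_fft=256, n_filters=26, n_ceps=13):
    """MFCCs: power spectrogram -> mel filterbank -> log -> DCT-II."""
    _, _, Z = stft(signal, fs=sr, nperseg=n_fft)
    power = np.abs(Z) ** 2                       # (n_fft//2+1, frames)
    mel_spec = mel_filterbank(n_filters, n_fft, sr) @ power
    log_mel = np.log(mel_spec + 1e-10)           # log mel-spectrogram
    return dct(log_mel, type=2, axis=0, norm='ortho')[:n_ceps]

# Toy one-second "heart sound": two low-frequency tones
sr = 2000
t = np.arange(sr) / sr
pcg = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
coeffs = mfcc(pcg, sr=sr)
print(coeffs.shape)   # (n_ceps, n_frames)
```

In the fusion approach described above, MFCC, GFCC, and Mel-spectrogram features would be concatenated per recording before classification.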