World Scientific

CORRELATION ANALYSIS OF PERFORMANCE METRICS FOR CLASSIFIER

    https://doi.org/10.1142/9789814619998_0081
    Cited by: 11 (Source: Crossref)

    Abstract:

    The correct selection of performance metrics is one of the key issues in evaluating a classifier's performance. Although many performance metrics have been proposed and used in the machine learning community, there is no common conclusion among practitioners regarding which metric to choose for evaluating a classifier's performance. In this paper, we investigate the potential relationships among some commonly used performance metrics. Based on their definitions, we first classify seven of the most widely used performance metrics into three groups, namely threshold metrics, rank metrics, and probability metrics. Then, we use Pearson linear correlation and Spearman rank correlation to investigate the relationships among these metrics. Experimental results support the reasonableness of classifying the seven commonly used metrics into three groups. This can help practitioners deepen their understanding of the relationships and groupings among performance metrics.
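
    The two correlation measures the paper relies on can be sketched as follows. This is a minimal illustration, not the paper's experimental code: the metric scores below are hypothetical values standing in for, e.g., the accuracy (a threshold metric) and AUC (a rank metric) of six classifiers.

    ```python
    import numpy as np

    # Hypothetical scores of six classifiers on two performance metrics
    # (illustrative values only; not taken from the paper's experiments).
    accuracy = np.array([0.71, 0.83, 0.65, 0.90, 0.78, 0.86])  # threshold metric
    auc      = np.array([0.74, 0.88, 0.69, 0.95, 0.80, 0.91])  # rank metric

    def pearson(x, y):
        """Pearson linear correlation: covariance normalized by both std devs."""
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

    def spearman(x, y):
        """Spearman rank correlation: Pearson applied to the ranks
        of the values (simple no-ties version)."""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        return pearson(rx, ry)

    print(f"Pearson:  {pearson(accuracy, auc):.3f}")
    print(f"Spearman: {spearman(accuracy, auc):.3f}")
    ```

    Here the two classifiers' metric values rank the classifiers identically, so Spearman correlation is exactly 1.0 while Pearson is slightly below 1.0; a high Spearman with a lower Pearson indicates a monotonic but non-linear relationship, which is why the paper examines both.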