Ensemble learning systems can lower the risk of overfitting that often arises in a single learning model. Unlike ensemble learning approaches based on re-sampling, negative correlation learning trains all learners in an ensemble simultaneously and cooperatively. However, overfitting has sometimes been observed in negative correlation learning as well. Two error bounds are therefore introduced into negative correlation learning to prevent overfitting. One is the upper bound of error output (UBEO), which divides the training data into two groups based on the distances between the data points and the current decision boundary. The other is the lower bound of error rate (LBER), which acts as a learning switch. While the performance measured by the error rate remains above LBER, negative correlation learning is applied to the whole training set. As soon as the error rate falls below LBER, negative correlation learning is applied only to the group of data points whose distances to the current decision boundary are within the range set by UBEO. The other group of data points, outside this range, is no longer learned; further learning on those points would make the learned decision boundary too complex to classify unseen data well. Experimental results explore how LBER and UBEO lead negative correlation learning towards a robust decision boundary.
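The abstract gives no pseudocode, so the following is only a minimal NumPy sketch of negative correlation learning with the two bounds as described above. It assumes the standard NCL penalty of Liu and Yao, approximates a point's distance to the decision boundary by the magnitude of the ensemble's error output |d - F̄(x)|, and uses an illustrative architecture and hyperparameter values (M, n_hidden, lam, lr, lber, ubeo) that are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden):
    """One-hidden-layer tanh network with a linear, real-valued output."""
    return {"W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "w2": rng.normal(0.0, 0.5, n_hidden),
            "b2": 0.0}

def forward(net, X):
    H = np.tanh(X @ net["W1"] + net["b1"])   # hidden activations
    return H, H @ net["w2"] + net["b2"]      # network output F_i(x)

def ncl_train(X, d, M=5, n_hidden=8, lam=0.5, lr=0.05, epochs=500,
              lber=0.05, ubeo=0.8):
    """NCL with an LBER learning switch and UBEO data filtering (a sketch)."""
    nets = [make_net(X.shape[1], n_hidden) for _ in range(M)]
    for _ in range(epochs):
        outs = np.array([forward(net, X)[1] for net in nets])  # (M, N)
        F_bar = outs.mean(axis=0)                # ensemble output
        err_rate = np.mean(np.sign(F_bar) != d)  # training error rate
        mask = np.ones(len(d), dtype=bool)
        if err_rate < lber:
            # LBER switch: once the error rate drops below LBER, learn only
            # on points whose ensemble error output |d - F_bar| is within
            # UBEO; points outside this band are no longer learned.
            mask = np.abs(d - F_bar) <= ubeo
            if not mask.any():
                continue
        Xs, ds, Fb = X[mask], d[mask], F_bar[mask]
        for net in nets:
            H, Fi = forward(net, Xs)
            n = len(ds)
            # Gradient of e_i = 0.5*(F_i - d)^2 + lam * p_i w.r.t. F_i,
            # with p_i = -(F_i - F_bar)^2 via the sum-to-zero identity.
            g = (Fi - ds) - lam * (Fi - Fb)
            gw2, gb2 = H.T @ g / n, g.mean()
            dH = g[:, None] * net["w2"][None, :] * (1.0 - H * H)
            gW1, gb1 = Xs.T @ dH / n, dH.mean(axis=0)
            net["w2"] -= lr * gw2
            net["b2"] -= lr * gb2
            net["W1"] -= lr * gW1
            net["b1"] -= lr * gb1
    return nets

# Toy usage: two Gaussian blobs with labels -1/+1.
X = np.vstack([rng.normal(-1.0, 0.7, (100, 2)), rng.normal(1.0, 0.7, (100, 2))])
d = np.hstack([-np.ones(100), np.ones(100)])
nets = ncl_train(X, d)
F_bar = np.mean([forward(net, X)[1] for net in nets], axis=0)
print("training error rate:", np.mean(np.sign(F_bar) != d))
```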