World Scientific

ON THE DESIGN OF A TREE CLASSIFIER AND ITS APPLICATION TO SPEECH RECOGNITION

    DOI: https://doi.org/10.1142/S0218001491000405 (Cited by: 1; source: Crossref)

    A new two-step algorithm is proposed for constructing an entropy-reduction-based decision tree classifier for large reference-class sets. The d-dimensional feature space is first mapped onto a line, allowing a dynamic choice of features, and the resulting linear space is then partitioned into two sections so as to minimize the average system entropy. The classes of each section, again treated as a collection of reference classes in a d-dimensional feature space, can be split further in the same manner if the collection is still considered excessively large, thus forming a binary decision tree whose nodes may have overlapping members. The advantage of such a classifier is that a test feature vector need not be matched exhaustively against all the references. We demonstrate in this paper that discrete syllable recognition using dynamic programming equipped with such a classifier can reduce recognition time by a factor of 40 to 100. The recognition speed is one third to one half of that achieved with hidden Markov models (HMM), while the recognition rate is somewhat higher. The theory is considerably simpler than that of HMM, but the decision tree can occupy a large amount of memory.
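The core splitting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the class samples have already been projected onto a line, and it searches candidate thresholds for the one that minimizes the size-weighted average entropy of the two resulting sections (the function names and data layout are hypothetical).

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Given 1-D projected feature values and their class labels,
    return the threshold that minimizes the size-weighted average
    entropy of the two sections, along with that entropy."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_h = None, float("inf")
    # Candidate thresholds: midpoints between consecutive distinct values.
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if h < best_h:
            best_t, best_h = t, h
    return best_t, best_h
```

Applying `best_split` recursively to each section (after re-projecting its members) yields the binary tree of the paper; a node stops splitting once its reference-class collection is small enough to search exhaustively.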