DOI: https://doi.org/10.1142/S1469026822500055
Cited by: 2 (Source: Crossref)

A large training dataset is a prerequisite for successfully training a deep learning model for image classification. Collecting such a dataset is time-consuming and costly, especially for plants. When a large dataset is not available, the challenge is how to train a deep model optimally on a small or medium-sized dataset. To address this challenge, a novel model is proposed that uses the available small plant dataset efficiently. The model focuses on data augmentation and aims to improve learning accuracy by oversampling the dataset with representative image patches. To extract the relevant patches, ORB key points are detected in the training images and image patches are then extracted around them using an innovative algorithm. The extracted ORB image patches augment the dataset and help avoid overfitting during the training phase. The proposed model is implemented with convolutional neural layers and its structure is based on the ResNet architecture. It is evaluated on the challenging ACHENY dataset, a Chenopodiaceae plant dataset comprising 27,030 images from 30 classes. The experimental results show that the patch-based strategy outperforms the classification accuracy of traditional deep models by 9%.
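The abstract does not detail the patch-extraction algorithm itself, but the general idea of cropping patches around ORB key points can be illustrated with OpenCV. The following is a minimal sketch under assumptions of my own: the fixed patch size, the key-point cap, and the ranking by detector response are illustrative choices, not the paper's method.

import cv2

def extract_orb_patches(image, patch_size=64, max_keypoints=50):
    """Detect ORB key points and crop square patches centred on them.

    patch_size, max_keypoints, and the response-based ranking are
    illustrative assumptions, not the algorithm proposed in the paper.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    keypoints = orb.detect(gray, None)

    half = patch_size // 2
    patches = []
    # Prefer the strongest key points first.
    for kp in sorted(keypoints, key=lambda k: k.response, reverse=True):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        # Skip key points too close to the border for a full patch.
        if x - half < 0 or y - half < 0 or \
           x + half > image.shape[1] or y + half > image.shape[0]:
            continue
        patches.append(image[y - half:y + half, x - half:x + half])
    return patches

# Hypothetical usage: the extracted patches, resized to the network input
# size, are added to the training set alongside the original images.
# img = cv2.imread("plant_sample.jpg")   # hypothetical file name
# augmented = [cv2.resize(p, (224, 224)) for p in extract_orb_patches(img)]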
