Architecture Knowledge Distillation for Evolutionary Generative Adversarial Network

https://doi.org/10.1142/S0129065725500133

    Generative Adversarial Networks (GANs) are effective for image generation, but their unstable training limits broader applications. Additionally, neural architecture search (NAS) for GANs with one-shot models often leads to insufficient subnet training, where subnets inherit weights from a supernet without proper optimization, further degrading performance. To address both issues, we propose Architecture Knowledge Distillation for Evolutionary GAN (AKD-EGAN). AKD-EGAN operates in two stages. First, architecture knowledge distillation (AKD) is used during supernet training to efficiently optimize subnetworks and accelerate learning. Second, a multi-objective evolutionary algorithm (MOEA) searches for optimal subnet architectures, ensuring efficiency by considering multiple performance metrics. This approach, combined with a strategy for architecture inheritance, enhances GAN stability and image quality. Experiments show that AKD-EGAN surpasses state-of-the-art methods, achieving a Fréchet Inception Distance (FID) of 7.91 and an Inception Score (IS) of 8.97 on CIFAR-10, along with competitive results on STL-10 (FID: 20.32, IS: 10.06). Code and models will be available at https://github.com/njit-ly/AKD-EGAN.
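
The abstract describes a two-stage pipeline: a distillation term added to supernet training, followed by a multi-objective search over candidate subnet architectures. The sketch below is a hypothetical illustration of those two ideas, not the authors' implementation; the loss form, the teacher/subnet tensors, and the (FID, IS) objective pair are assumptions chosen only to make the structure concrete.

```python
# Hypothetical sketch of the two-stage idea in AKD-EGAN (not the paper's code).
import torch.nn.functional as F

def akd_generator_loss(subnet_images, teacher_images, disc_scores_on_fake, lambda_kd=1.0):
    """Stage 1 (assumed form): adversarial loss for a sampled subnet generator
    plus a distillation term pulling its outputs toward a pretrained teacher."""
    adv = F.softplus(-disc_scores_on_fake).mean()      # non-saturating GAN generator loss
    kd = F.mse_loss(subnet_images, teacher_images)     # architecture knowledge distillation term
    return adv + lambda_kd * kd

def pareto_front(candidates):
    """Stage 2 (simplified): keep non-dominated subnets under two objectives,
    here (FID, -IS), both treated as minimization targets."""
    front = []
    for i, (fid_i, nis_i) in enumerate(candidates):
        dominated = any(
            fid_j <= fid_i and nis_j <= nis_i and (fid_j < fid_i or nis_j < nis_i)
            for j, (fid_j, nis_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Toy usage: three candidate subnets scored as (FID, -IS); indices of the Pareto front.
print(pareto_front([(7.9, -8.97), (9.5, -8.50), (8.2, -9.10)]))
```

A full evolutionary search would additionally apply selection, crossover, and mutation over the architecture encodings; the non-dominated filter above only illustrates how multiple metrics can drive the selection step.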