Sparse Deep Neural Network Optimization for Embedded Intelligence
Abstract
Deep neural networks have become increasingly popular because of their ability to solve highly complex pattern recognition problems. However, they often require massive computational and memory resources, which is the main reason they are difficult to run efficiently, and in their entirety, on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks: it proposes a variance-reduced (VR) optimization method combined with regularization techniques that compresses the memory footprint of models while keeping training fast. It is shown theoretically and experimentally that sparsity-inducing regularization works effectively with VR-based optimization, in which a hyper-parameter of the optimizer controls the behaviour of the stochastic element when solving non-convex problems.
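To make the idea concrete, the following is a minimal sketch of how sparsity-inducing regularization can be combined with a variance-reduced optimizer, assuming a prox-SVRG-style update with an L1 penalty. The function names, the gradient oracle `grad_fn`, and the scaling factor `alpha` (a stand-in for the hyper-parameter that controls the stochastic element) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the L1 penalty: shrinks weights toward zero,
    which is what induces sparsity in the model parameters."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_svrg(w0, grad_fn, n_samples, lr=0.1, lam=1e-3, alpha=1.0,
              n_epochs=10, inner_steps=100, rng=None):
    """Variance-reduced (SVRG-style) optimization with an L1 proximal step.

    grad_fn(w, i) returns the gradient of the loss on sample i.
    `alpha` scales the variance-reduction correction: alpha=1 is plain
    SVRG, alpha=0 reduces to vanilla proximal SGD.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(n_epochs):
        # Full gradient at the snapshot point (the variance-reduction anchor).
        w_snap = w.copy()
        full_grad = np.mean(
            [grad_fn(w_snap, i) for i in range(n_samples)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n_samples)
            # Variance-reduced stochastic gradient estimate.
            g = grad_fn(w, i) - alpha * (grad_fn(w_snap, i) - full_grad)
            # Gradient step followed by the sparsity-inducing proximal step.
            w = soft_threshold(w - lr * g, lr * lam)
    return w
```

The soft-thresholding step after each gradient update drives small weights exactly to zero, which is how the regularization compresses the model's memory footprint during training rather than after it.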