The monitoring and maintenance of grid equipment have become increasingly important with the continual progress of smart grid technology. Efficient recognition of grid equipment is essential for equipment status monitoring and fault diagnosis, and it directly affects the precision and timeliness of grid operation. However, current image recognition methods rely on complex models and extensive computational resources, which makes them difficult to deploy in resource-limited field environments and restricts their use in operations such as drone-based power line inspection. To address this obstacle, this paper introduces a lightweight identification approach for grid equipment based on model compression. The method aims to preserve recognition accuracy while reducing the computational workload and storage demands of the model, making it well suited for deployment in drone-based power line inspection. A target recognition network is introduced that integrates multi-scale information tailored to grid equipment and embeds an attention mechanism to strengthen the model's ability to identify key features. Building on this, model compression techniques are applied to the trained model: redundant weights are removed while accuracy is maintained, shrinking the model's size and computational complexity and yielding a lightweight network.
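The abstract above does not specify which attention mechanism is embedded; as a minimal illustrative sketch (not the paper's actual design), a squeeze-and-excitation-style channel attention block can be written in a few lines. All names and shapes here are assumptions for illustration:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (sketch).

    feat: (C, H, W) feature map.
    w1: (C // r, C) and w2: (C, C // r) are the bottleneck MLP
    weights (r is the reduction ratio; both are assumed names).
    Returns the feature map reweighted per channel.
    """
    # Squeeze: global average pool over spatial dims -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    h = np.maximum(w1 @ s, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Reweight each channel by its learned importance
    return feat * gate[:, None, None]
```

The gate values are channel-importance weights; channels the network deems irrelevant are suppressed toward zero, which is what makes such features easier to identify downstream.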
Model pruning is one of the main methods of deep neural network model compression. However, existing pruning methods are inefficient: considerable redundancy remains in the network, and pruning has a large impact on accuracy. In addition, the traditional pruning pipeline usually fine-tunes the network to restore accuracy, but fine-tuning recovers only a limited amount, so it is difficult to reach a high accuracy level. In this paper, we propose a new neural network pruning framework: channel sparsity is induced by introducing a scale factor, and the sparsified network is pruned with a single global threshold, which greatly improves pruning efficiency. After pruning, we propose a rewind method to restore accuracy: weights saved during training are reloaded into the pruned network, which is then retrained. We also study the best rewind point for three networks. Experimental results show that our method significantly reduces the number of parameters and FLOPs without hurting, and in some cases even improving, accuracy, and that the proposed rewind method recovers accuracy better than fine-tuning. We further find that the epoch with the highest accuracy is the best rewind point: saving its weights and retraining the model yields the highest final accuracy.
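The scale-factor-plus-global-threshold step described above can be sketched concretely. Assuming the scale factors are per-channel values trained with a sparsity penalty (the abstract does not give implementation details), a global threshold is simply a quantile over all layers pooled together:

```python
import numpy as np

def global_prune_masks(scale_factors, prune_ratio):
    """Channel pruning via one global threshold (sketch).

    scale_factors: list of per-layer arrays of channel scale
    factors (assumed here to be trained with an L1 penalty so
    that unimportant channels shrink toward zero).
    prune_ratio: fraction of channels to remove network-wide.
    Returns one boolean keep-mask per layer.
    """
    # Pool all scale factors to set a single global threshold,
    # so the pruning budget is allocated across layers automatically
    all_s = np.concatenate([np.abs(s) for s in scale_factors])
    thresh = np.quantile(all_s, prune_ratio)
    # Keep channels whose scale factor exceeds the threshold
    return [np.abs(s) > thresh for s in scale_factors]
```

The rewind step would then reload the weights saved at the chosen epoch into the surviving channels and retrain, rather than fine-tuning from the pruned weights.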
Counting the number of people in a specific area is crucial for maintaining proper crowd safety and management, especially in highly congested indoor scenarios. Recent convolutional neural network (CNN) approaches integrate auxiliary sub-networks to increase the accuracy of the model in estimating crowd size. However, these models incur large computational costs due to the additional calculations, resulting in an impractically slow inference speed for real-world applications. In this paper, we propose a fast, efficient, and robust crowd counting model called Condensed Network, or ConNet. We utilize a composite technique composed of multiple compression methods to reduce the number of parameters of our proposed model. ConNet attains counting accuracy on par with state-of-the-art crowd counting methods on benchmark datasets featuring indoor scenes, while significantly reducing parameter count and increasing inference speed. Moreover, ConNet remains accurate even under extreme changes in lighting conditions, image resolutions, and camera orientations. Our smallest model, ConNet-04, has 61.0× fewer parameters and is up to 9.0× faster than the baseline approach. Our code and trained models are publicly available at https://github.com/mikatej/ConNet.
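The abstract mentions a composite of multiple compression methods without naming them. As one illustrative example of a step such a pipeline might include (an assumption, not ConNet's documented method), uniform symmetric weight quantization trades float32 weights for int8 values plus a single scale:

```python
import numpy as np

def quantize(w, num_bits=8):
    """Uniform symmetric weight quantization (sketch of one
    possible compression step; not ConNet's documented pipeline).

    Returns the int8 codes and the float scale needed to decode.
    """
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    # One scale per tensor, guarded against all-zero weights
    scale = max(np.max(np.abs(w)) / qmax, 1e-12)
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Decode int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale
```

Storing int8 codes instead of float32 weights gives roughly a 4× size reduction per tensor, at the cost of a small, bounded rounding error.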
Since Deep Neural Networks (DNNs) are increasingly used in safety-critical Intelligent System (IS) applications, their robustness has become a major concern in IS design. Due to the vulnerability of DNN models, adversarial examples generated by malicious attacks may result in disasters. Although there are plenty of defense methods against these adversarial attacks, each existing method can resist only specific attacks, and their accuracy degrades dramatically when processing natural examples. To address this problem, we propose an effective Cooperative Defensive Architecture (CDA) that enhances the robustness of IS devices by integrating heterogeneous base classifiers. Because of the parallel mechanism in ensemble learning, the compressed heterogeneous base classifiers do not increase prediction time on the device. Comprehensive experimental results show that DNNs modified by our approach not only resist adversarial examples more effectively than the original model, but also achieve high accuracy when processing natural examples.
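The parallel ensemble mechanism described above can be sketched in its simplest form: each heterogeneous base classifier produces class probabilities independently (so latency is that of the slowest member, not the sum), and the outputs are combined. The averaging rule below is an assumed combination strategy for illustration; the abstract does not specify CDA's actual rule:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Combine heterogeneous base classifiers by probability
    averaging (sketch; the combination rule is an assumption).

    prob_list: one (N, K) probability array per base classifier,
    computed in parallel on the device.
    Returns the predicted class index per sample.
    """
    # Stack to (M, N, K) and average over the M classifiers
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return np.argmax(avg, axis=1)
```

The robustness intuition is that an adversarial example crafted against one base classifier is less likely to fool all heterogeneous members at once, so the averaged prediction is harder to flip.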
Deep neural networks have become increasingly popular because of their ability to solve very complex pattern recognition problems. However, they often require massive computational and memory resources, which is the main reason they are difficult to run efficiently, or at all, on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks: it proposes a variance-reduced (VR) optimization method combined with regularization techniques that compresses the memory footprint of models while keeping training fast. It is shown theoretically and experimentally that sparsity-inducing regularization works effectively with VR-based optimization, where a hyper-parameter in the optimizer controls the behavior of the stochastic element when solving non-convex problems.
In recent years, the rapid development of mobile devices and embedded systems has raised demand for intelligent models that can address increasingly complicated problems. However, complex structures and extensive parameters place significant pressure on efficiency, storage space, and energy consumption. Moreover, the explosive growth of tasks with enormous model structures and parameter counts makes manual model compression impractical. A standardized and effective model compression solution that yields lightweight neural networks is therefore an urgent industry demand. Accordingly, the Dynamic Channel Ranking Strategy (DCRS) method is proposed to compress deep convolutional neural networks. DCRS selects the high-contribution channels of each prunable layer according to a compression ratio searched by a reinforcement learning agent. Compared with current model compression methods, DCRS efficiently applies various channel ranking strategies to prunable layers. Experiments indicate that at a 50% compression ratio, the compressed MobileNet achieves 70.62% top-1 and 88.2% top-5 accuracy on ImageNet, and the compressed ResNet achieves 92.03% accuracy on CIFAR-10. DCRS also reduces more FLOPs in these networks: the compressed model achieves the best top-1 and top-5 accuracy on ResNet50 and the best top-1 accuracy on MobileNetV1.
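The per-layer selection step described above can be sketched with one common channel ranking strategy, L1-norm scoring (the abstract says DCRS supports several strategies; this is just one, and the reinforcement learning agent that chooses the ratio is out of scope here):

```python
import numpy as np

def rank_and_prune(conv_w, keep_ratio):
    """Rank output channels of one conv layer and keep the top
    fraction chosen by the search agent (sketch; L1-norm ranking
    is one of several possible strategies).

    conv_w: weights shaped (out_channels, in_channels, kH, kW).
    keep_ratio: fraction of channels to keep, e.g. from the agent.
    Returns the pruned weights and the kept channel indices.
    """
    # Score each output channel by the L1 norm of its filter
    scores = np.abs(conv_w).sum(axis=(1, 2, 3))
    # Keep at least one channel so the layer stays functional
    k = max(1, int(round(keep_ratio * conv_w.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return conv_w[keep], keep
```

In a full pipeline the kept indices would also be used to slice the next layer's input channels, which is where the FLOPs reduction compounds across the network.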