Single Image Super-Resolution (SISR) has seen significant advancements with the advent of deep learning techniques. However, many existing approaches face challenges such as high computational cost, poor generalization to unseen data, and dependence on large paired datasets. This paper proposes a novel, lightweight Parallel Super-Resolution Convolutional Neural Network (PSRCNN) designed to address these limitations. PSRCNN leverages parallel feature extraction, a transposed-convolution upsampling layer, and an efficient feature fusion strategy to balance performance and efficiency. Rigorous evaluations on established benchmark datasets demonstrate that PSRCNN achieves competitive performance, particularly in terms of the Structural Similarity Index (SSIM), a metric closely aligned with human visual perception. Moreover, the model offers a significant advantage in computational efficiency, requiring fewer parameters than many recent Super-Resolution (SR) methods. Ablation studies confirm that the parallel design enhances image reconstruction quality, demonstrating the potential of parallel CNN architectures for SISR, and the approach remains open to further enhancement.
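To make the described design concrete, the following is a minimal PyTorch sketch of a parallel-branch CNN with transposed-convolution upsampling and 1x1 feature fusion. The branch count, layer widths, kernel sizes, and the name ParallelSRSketch are illustrative assumptions, not the paper's exact PSRCNN configuration.

```python
# Minimal sketch of a parallel-branch SR network in PyTorch. All hyperparameters
# here are hypothetical illustrations, not the published PSRCNN settings.
import torch
import torch.nn as nn

class ParallelSRSketch(nn.Module):
    def __init__(self, channels=32, scale=2):
        super().__init__()
        # Two parallel feature-extraction branches with different receptive fields.
        self.branch3 = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(3, channels, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(inplace=True),
        )
        # Fuse the concatenated branch features with a 1x1 convolution.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Learned upsampling via transposed convolution (sized for even scales),
        # followed by a reconstruction convolution.
        self.up = nn.ConvTranspose2d(channels, channels, scale * 2,
                                     stride=scale, padding=scale // 2)
        self.recon = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feats = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return self.recon(self.up(self.fuse(feats)))

if __name__ == "__main__":
    lr = torch.randn(1, 3, 24, 24)
    print(ParallelSRSketch(scale=2)(lr).shape)  # -> torch.Size([1, 3, 48, 48])
```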
Recently, deep convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SISR). In particular, dense skip connections and residual learning structures have been shown to promote better performance. However, most existing deep CNN-based networks rely on interpolating the original image before feeding it to the network, or apply transposed convolution only in the reconstruction stage, and thus do not fully exploit the hierarchical features of the network for final reconstruction. In this paper, we present a novel cascaded Dense-UNet (CDU) structure that takes full advantage of all hierarchical features for SISR. In each Dense-UNet block (DUB), many short, dense skip pathways facilitate the flow of information and integrate different receptive fields. A series of DUBs are concatenated to acquire high-resolution features and capture complementary contextual information, with the upsampling operators embedded within the DUBs. Furthermore, residual learning is introduced to our network to fuse shallow features from the low-resolution (LR) image with deep features from the cascaded DUBs, further boosting super-resolution (SR) reconstruction results. The proposed method is evaluated quantitatively and qualitatively on four benchmark datasets; our network achieves performance comparable to state-of-the-art super-resolution approaches and produces visually pleasing results.
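As an illustration of the dense skip pathways described above, the sketch below implements a single dense block with a local residual connection in PyTorch. The growth rate, depth, and the name DenseBlockSketch are hypothetical; the actual DUB additionally embeds upsampling operators in a U-Net topology that this sketch does not reproduce.

```python
# Minimal sketch of one dense block with short skip pathways, in the spirit of
# the Dense-UNet block (DUB); growth rate and depth are illustrative guesses.
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    def __init__(self, in_ch=32, growth=16, layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth  # each subsequent layer sees all previous feature maps
        # A 1x1 convolution compresses the concatenated features back to in_ch.
        self.compress = nn.Conv2d(ch, in_ch, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # Dense skip pathway: concatenate every earlier output as input.
            feats.append(layer(torch.cat(feats, dim=1)))
        # Local residual connection fuses the block input with its output.
        return x + self.compress(torch.cat(feats, dim=1))
```

The concatenation of all earlier outputs is what gives each layer access to a mixture of receptive fields, while the residual connection keeps gradients flowing through deep cascades of such blocks.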
Recently, a method called Meta-SR solved the problem of super-resolution at arbitrary scale factors with a single model. However, its reconstruction accuracy is limited compared with RDN+ and EDSR+. Inspired by Meta-SR, we observed that combining its core idea with D-DBPN could yield a network whose reconstruction accuracy matches D-DBPN's while retaining the arbitrary-scale capability. Complementing Meta-SR's Meta-Upscale Module, we designed a new structure called the Meta-Downscale Module. Using these two modules together with a back-projection structure, we construct an arbitrary-scale back-projection network that can enlarge images by any scale factor with a single model while obtaining state-of-the-art reconstruction results. Extensive experiments show that our proposed method achieves better reconstruction quality than Meta-SR and is more efficient than D-DBPN. In addition, we evaluated the proposed method on widely used benchmark datasets for single image super-resolution; the experimental results show the superiority of our model compared to RDN+ and EDSR+.
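The back-projection structure referenced here originates in D-DBPN; below is a minimal PyTorch sketch of one up-projection unit at a fixed integer scale. Note that the Meta-Upscale and Meta-Downscale modules instead predict filter weights dynamically to support arbitrary (including non-integer) scale factors, which this fixed-scale sketch does not attempt; the class name and channel count are assumptions.

```python
# Minimal sketch of a D-DBPN-style up-projection unit with a fixed integer
# scale. The arbitrary-scale meta modules described in the abstract would
# replace the fixed (de)convolution kernels with dynamically predicted ones.
import torch
import torch.nn as nn

class UpProjectionSketch(nn.Module):
    def __init__(self, ch=64, scale=2):
        super().__init__()
        k, s, p = scale * 2, scale, scale // 2
        self.up1 = nn.ConvTranspose2d(ch, ch, k, stride=s, padding=p)
        self.down = nn.Conv2d(ch, ch, k, stride=s, padding=p)
        self.up2 = nn.ConvTranspose2d(ch, ch, k, stride=s, padding=p)

    def forward(self, lr_feat):
        h0 = self.up1(lr_feat)    # project LR features up to HR space
        l0 = self.down(h0)        # project the HR estimate back down
        residual = l0 - lr_feat   # back-projection error in LR space
        h1 = self.up2(residual)   # project the error up
        return h0 + h1            # correct the HR estimate with the error
```

The down-projection re-encodes the upsampled features, and the discrepancy between that re-encoding and the original LR features drives a corrective second upsampling, which is the iterative error-feedback idea at the heart of back-projection networks.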