This paper proposes a new reconstruction approach for detecting brain tumors in low-dose CT (LDCT) images. The approach pre-processes LDCT images before tumor detection: Lucy–Richardson deconvolution first removes the blurring introduced by convolution with the point-spread function along with additive noise; adaptive histogram equalization then improves the contrast of the deblurred images; and pixel normalization and elimination techniques further refine image quality. At the heart of the reconstruction stage is the integration of the Daubechies transform with a tree-fusion scheme based on a decision-tree algorithm; this fusion aims to build an accurate representation of the LDCT images and thereby enable precise tumor detection. Evaluation of the method produced promising results: a reconstruction time of 2.6342 ms and a computational overhead of 26.6413 KB. The model achieves a high PSNR of 64.32 dB, an error measure below 10.54, indicating that the reconstructed images differ little from the originals, and an SSIM of 0.989, reflecting that the main structural information is preserved during reconstruction and making the method more reliable for tumor recognition. Overall, this methodology provides a strong denoising and reconstruction framework for improving brain tumor identification in LDCT imaging, combining accuracy, efficiency, and image fidelity.
We propose a new scheduling algorithm for a three-dimensional grid precedence graph of size n, and we prove that its communication overhead is only Θ(log log n). Finally, we show that Ω(log log n) is a lower bound on the overhead of any schedule with a fixed number of processors (p ≥ 3).
The memory allocator is an essential component of a program and largely determines its overall performance. Current general-purpose memory allocators fall short on performance because they cannot determine the memory-allocation behavior of the applications above them. This paper introduces an optimized mechanism that leverages compiler instrumentation to gather and analyze upper-level memory-allocation patterns. Experiments show that our optimized method achieves better performance (a 10% increase on average and an 18% increase under intense workloads) while incurring low overhead.
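The core idea, using instrumentation-gathered allocation patterns to specialize the allocator, can be sketched as a small simulation. This is a hypothetical illustration, not the paper's mechanism: the trace format, site names, and threshold are all assumptions, and a real implementation would operate in the compiler and runtime rather than in Python.

```python
from collections import Counter

# Hypothetical allocation trace of (call_site, size) pairs, as a compiler
# instrumentation pass might record them. Site names are illustrative.
trace = (
    [("parse_node", 48)] * 500
    + [("string_buf", 256)] * 120
    + [("misc", 48)] * 3
)

# Aggregate per-site statistics to find hot, fixed-size allocation sites.
stats = Counter()
for site, size in trace:
    stats[(site, size)] += 1

# Sites above a threshold get a dedicated size-class pool; the rest fall
# back to the general allocator. The threshold is an assumed tuning knob.
HOT_THRESHOLD = 100
pools = {key: count for key, count in stats.items() if count >= HOT_THRESHOLD}
```

Here `pools` would drive pool pre-sizing at program start, so that hot allocation sites bypass the general allocator's slower path.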