An efficient 3D visualization system requires not only fast volume rendering algorithms but also effective navigation methods. Rendering speed is one of the key technologies in most 3D visualization applications. We exploit software and hardware technologies, such as threshold segmentation, Intel SIMD instructions on the Pentium 4, and 3D texturing, to obtain interactive volume rendering on a standard PC without specialized, expensive hardware. Path planning is essential in many 3D visualization applications, such as virtual endoscopy, in order to accelerate exploration. There are three major types of methods for extracting a navigation path from a 3D data set: manual, 3D distance transform, and thinning-based techniques. 3D thinning is a desirable method for extracting the skeletons of objects, but it has some severe problems: it is time-consuming and produces discontinuities and small spurious branches. An effective encoding and coordinate-transform-based scheme is presented to generate a look-up table of 3D thinning templates to speed up path extraction, and a two-pass tracking technique then trims the small branches of the skeletons. A tri-pass cubic Bézier technique is proposed to reduce the large curvatures caused by the discrete representation of the path. A smooth, C1-continuous navigation path is thus produced by our algorithms. Following this path, the camera moves and rotates smoothly without any jitter. Our system is very useful and can be widely applied because it makes full use of the existing, inexpensive capabilities of PCs.
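The tri-pass cubic Bézier scheme itself is not detailed in the abstract, but the underlying idea of replacing a jagged voxel centerline with a C1-continuous piecewise cubic curve can be sketched roughly as follows. The Catmull-Rom choice of control points and the NumPy representation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: smooth a discrete 3D centerline with piecewise cubic
# Bezier segments whose control points come from Catmull-Rom tangents,
# so the curve passes through the skeleton voxels with C1 continuity.
import numpy as np

def smooth_centerline(pts, samples_per_seg=16):
    """pts: (k, 3) array of ordered skeleton voxel centers, k >= 2."""
    pts = np.asarray(pts, dtype=float)
    pts = np.vstack([pts[:1], pts, pts[-1:]])   # pad ends for tangents
    t = np.linspace(0.0, 1.0, samples_per_seg)[:, None]
    curve = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        # Catmull-Rom tangents expressed as Bezier control points, so
        # adjacent segments share a tangent at each join (C1 continuity).
        b1 = p1 + (p2 - p0) / 6.0
        b2 = p2 - (p3 - p1) / 6.0
        curve.append((1 - t) ** 3 * p1 + 3 * (1 - t) ** 2 * t * b1
                     + 3 * (1 - t) * t ** 2 * b2 + t ** 3 * p2)
    return np.vstack(curve)
```

A camera keyframed along the returned samples inherits the curve's continuity, which is what removes the jitter the abstract mentions.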
In this paper, a new approach named focal region-based volume rendering for visualizing the internal structures of volumetric data is presented. This approach presents volumetric information by integrating contextual information, obtained from a structural analysis of the data set, with a lens-like focal-region rendering that shows more detailed information. This feature-based approach contains three main components: (i) a feature extraction model using 3D image processing techniques to explore the structure of objects and provide contextual information; (ii) an efficient ray-bounded volume ray casting renderer to provide detailed information about the volume of interest in the focal region; (iii) tools for manipulating focal regions, which make the approach more flexible. The approach provides a powerful framework for producing detailed information from volumetric data. Providing contextual information and focal-region renditions at the same time makes the volume information easier for scientists to understand and comprehend. The interaction techniques provided in this approach make focal region-based volume rendering more flexible and easier to use.
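A minimal sketch of what ray-bounded casting in a focal region might look like is given below: each ray is clipped to its intersection with the focal region, so the expensive sampling loop runs only inside the volume of interest. The spherical region shape, the 8-bit scalar volume, and the toy linear transfer function are all assumptions for illustration; the paper's feature-based region definition is not reproduced.

```python
# Sketch: clip each ray to a spherical focal region, then run ordinary
# front-to-back compositing only on the clipped interval.
import numpy as np

def ray_sphere(o, d, c, r):
    """Entry/exit parameters of unit-direction ray o + t*d vs. sphere (c, r)."""
    oc = o - c
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - r * r)
    if disc < 0.0:
        return None
    s = np.sqrt(disc)
    return max(-b - s, 0.0), -b + s

def cast_focal_ray(vol, o, d, center, radius, dt=0.5):
    """March one ray, but only across its overlap with the focal sphere."""
    d = d / np.linalg.norm(d)
    hit = ray_sphere(o, d, center, radius)
    if hit is None:
        return 0.0
    color, alpha = 0.0, 0.0
    for t in np.arange(hit[0], hit[1], dt):
        p = np.round(o + t * d).astype(int)
        if np.any(p < 0) or np.any(p >= vol.shape):
            continue
        v = vol[tuple(p)] / 255.0        # assumes an 8-bit scalar volume
        a = 0.1 * v                      # toy linear transfer function
        color += (1.0 - alpha) * a * v   # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color
```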
In volume rendering, an important acceleration issue is reducing the calculations spent on occluded voxels. Although this issue has been addressed in the ray casting approach, it is difficult to apply the idea to the projection approach because of uncertain termination conditions. In this paper, we propose a new method that effectively addresses this exclusion problem in the projection approach, so the rendering process can be accelerated without impairing rendered image quality. During rendering, the new method employs the dynamic screen technique to manage the pixels whose accumulated opacity has not yet reached 1.0. A ray-cast link is set up at each pixel to record the rendered voxels intersected by the ray cast from that pixel. Based on the rendered voxels covering pixels whose accumulated opacity is below 1.0, visible voxels are selected and rendered front to back using the neighboring relationship between the rendered voxels and the voxels still to be rendered. Thus, occluded voxels are accurately and dynamically excluded from the loading and rendering processes. Our method can in general be applied to both parallel and perspective projections, with regular and irregular volume datasets. Our experimental results show that the proposed method can significantly accelerate volume rendering when the data volume has a high percentage of occluded voxels. The method also performs fairly efficiently when expensive shading calculations are requested in volume rendering.
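The dynamic-screen bookkeeping can be illustrated with a rough sketch: an explicit set of active pixels (accumulated opacity still below 1.0) is maintained, and any voxel that projects only onto retired pixels is skipped. The one-pixel voxel footprint and the presorted front-to-back traversal are simplifying assumptions; the paper's ray-cast links and neighborhood-based voxel selection are not reproduced here.

```python
# Sketch of the dynamic-screen idea: composite voxels front to back and
# retire each pixel once it becomes opaque, so later voxels projecting
# onto it are excluded from rendering.
import numpy as np

def project_front_to_back(voxels, shape, opacity_of, color_of):
    """voxels: iterable of (x, y, value) presorted front to back,
    where (x, y) is the voxel's projected pixel (one-pixel footprint)."""
    color = np.zeros(shape)
    alpha = np.zeros(shape)
    active = {(i, j) for i in range(shape[0]) for j in range(shape[1])}
    for x, y, v in voxels:
        if (x, y) not in active:
            continue                     # occluded voxel: skip all work
        a = opacity_of(v)
        color[x, y] += (1.0 - alpha[x, y]) * a * color_of(v)
        alpha[x, y] += (1.0 - alpha[x, y]) * a
        if alpha[x, y] >= 0.999:
            active.discard((x, y))       # pixel is opaque; retire it
    return color
```

The savings grow with the fraction of occluded voxels, matching the abstract's observation that the method helps most on volumes with high occlusion.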
Volume data often contain information that is redundant for clinical use. The essence of volume rendering can be regarded as a mechanism for determining the visibility of redundant information and structures of interest using different approaches. In existing rendering algorithms, controlling the visibility of these structures depends on the following factors: the data value of the current voxel and its derivatives (used in transfer-function-based approaches), and the voxel position (used in volume clipping). This paper introduces a user-defined distance into the volume rendering pipeline to control the visibility of structures. The distance-based approach, named the distance transfer function, has the flexibility of transfer functions for depicting data information and the advantages of volume clipping for visualizing inner structures. The results show that the distance-based approach is a powerful tool for depicting volume data information.
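A rough sketch of the idea, under the assumption that the distance is Euclidean distance to a user-picked reference point with a Gaussian falloff (the paper leaves the distance definition to the user), is that per-sample opacity becomes the product of a conventional value-based transfer function and a distance weight:

```python
# Sketch of a distance transfer function: opacity = value_tf(value) *
# weight(distance to a user-defined reference point). The Gaussian
# falloff and the reference point are illustrative assumptions.
import numpy as np

def distance_tf(value, pos, ref, value_tf, sigma=30.0):
    """value: scalar sample; pos, ref: 3D coordinates."""
    d = np.linalg.norm(np.asarray(pos, float) - np.asarray(ref, float))
    w = np.exp(-0.5 * (d / sigma) ** 2)   # fade structures far from ref
    return value_tf(value) * w

# Example: emphasize high intensities near the reference point.
opacity = distance_tf(200, pos=(64, 64, 40), ref=(60, 60, 44),
                      value_tf=lambda v: min(v / 255.0, 1.0))
```

Because the weight depends on position rather than data value, it behaves like a soft, feathered clip region, which is the combination of transfer-function flexibility and clipping the abstract describes.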
This paper develops an approach to extract the volume of interest (VOI) from a CT dataset based on volume rendering. The method first obtains a rough VOI from the volumetric data by simply adjusting the window level and window width, then enhances the contrast among voxels using the Linear General Fuzzy Operator (LGFO), and finally extracts the desired structure from the enhanced 3D data through a feature function. The parameters are adjusted iteratively until a satisfactory VOI is extracted. Experimental results show that this multi-step method can extract a VOI that clearly represents the three-dimensional anatomical structure of objects such as tumors or normal organs, and that it has potential applications in diagnosis and education.
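The window level/width step used to obtain the rough VOI is standard and can be sketched as follows; the LGFO enhancement and the feature function are specific to the paper and are not reproduced here.

```python
# Sketch of intensity windowing: map the window [L - W/2, L + W/2]
# linearly onto [0, 1] and clip everything outside it.
import numpy as np

def apply_window(volume, level, width):
    lo = level - width / 2.0
    out = (volume.astype(float) - lo) / width
    return np.clip(out, 0.0, 1.0)

# Example: a soft-tissue window on CT data in Hounsfield units.
# ct = np.load("ct_volume.npy")          # hypothetical input file
# voi = apply_window(ct, level=40, width=400)
```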
Improving image quality and rendering speed has always been a challenge for programmers involved in large-scale volume rendering, especially in the field of medical image processing. This paper performs volume rendering on the graphics processing unit (GPU), whose massively parallel capability has the potential to revolutionize this field. The results allow doctors to diagnose and analyze 2D computed tomography (CT) scan data using three-dimensional visualization techniques. The system was applied to multiple types of datasets, with medical volume data ranging from 10 MB to 350 MB. Furthermore, using the compute unified device architecture (CUDA) framework, a low-learning-curve technology, greatly reduces the cost involved in CT scan analysis and thus brings it within reach of the common masses. The volume rendering was performed on an Nvidia Tesla C1060 card, which has 240 CUDA cores for data-parallel execution, and its performance has been benchmarked.
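The per-pixel parallel decomposition that such a CUDA renderer exploits can be sketched in Python with Numba's CUDA bindings: one thread per screen pixel, each marching its own ray. This is a stand-in, not the paper's kernel; the orthographic rays and the maximum-intensity compositing below are illustrative assumptions.

```python
# Sketch of GPU volume rendering structure: every screen pixel gets its
# own thread, which marches a ray through the volume independently.
import numpy as np
from numba import cuda

@cuda.jit
def mip_kernel(vol, img):
    x, y = cuda.grid(2)                  # one thread per output pixel
    if x < img.shape[0] and y < img.shape[1]:
        m = 0.0
        for z in range(vol.shape[2]):    # orthographic ray along z
            v = vol[x, y, z]
            if v > m:                    # maximum-intensity projection
                m = v
        img[x, y] = m

vol = np.random.rand(256, 256, 128).astype(np.float32)
img = np.zeros((256, 256), dtype=np.float32)
threads = (16, 16)
blocks = ((256 + 15) // 16, (256 + 15) // 16)
mip_kernel[blocks, threads](vol, img)    # Numba handles host/device copies
```

Because each ray is independent, the kernel scales naturally across the card's CUDA cores, which is the source of the speedup the abstract benchmarks.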
3D volume visualization uses computer technology to overcome the difficulties of 2D imaging. A volume visualization approach has been successfully implemented for the Surgical Planning System at the National Neuroscience Institute (NNI). The system allows surgeons to plan a surgical approach on a set of 2D image slices, process the slices into volume models, and visualize them in 3D rapidly and interactively on a PC. In our implementation, we have applied it to neurosurgical planning. The surgeon can visualize objects of interest, such as the tumor and the surgical path, and verify that the surgical plan avoids critical structures, so that the planned surgical path can be optimal.
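The slice-to-volume step implied here, stacking aligned 2D slices into a 3D array that rendering can consume, might look like the following sketch; the file paths and the imageio reader are hypothetical.

```python
# Sketch: stack an ordered set of aligned 2D slices into a 3D volume.
import glob
import numpy as np
import imageio.v3 as iio

slice_files = sorted(glob.glob("slices/slice_*.png"))  # hypothetical paths
volume = np.stack([iio.imread(f) for f in slice_files], axis=0)
print(volume.shape)   # (num_slices, height, width)
```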
Underwater acoustics is the primary and most effective method for underwater object detection, and the complex underwater acoustic battlefield environment can be visually described by a three-dimensional (3D) energy field. The traditional underwater acoustic volume data can be obtained by solving 3D propagation models, but this requires a large amount of computation. In this paper, a novel modeling approach is proposed that transforms the wave equation into two-dimensional (2D) space and optimizes the energy-loss propagation model. In this way, little of the information in the resulting volume data is lost, while the data-processing requirements of real-time visualization are still met. In the volume rendering stage, a 3D texture mapping method is used. The experimental results are evaluated on data size and frame rate, showing that our approach outperforms other approaches and achieves better real-time performance and visual effects.
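The abstract does not fully specify the 2D-to-3D transformation, but one common way to obtain a 3D energy field from 2D solutions is to solve the propagation model on a vertical range-depth plane and sweep it around the source axis under an assumption of azimuthal symmetry. The sketch below illustrates only that sweep; the symmetry assumption and the voxel-unit ranges are illustrative, and the paper's optimized energy-loss model is not reproduced.

```python
# Sketch: sweep a 2D range-depth transmission-loss field around the
# vertical source axis to fill a 3D volume, assuming azimuthal symmetry.
import numpy as np

def sweep_to_volume(tl_2d, n_xy):
    """tl_2d: (n_range, n_depth) array; returns (n_xy, n_xy, n_depth)."""
    n_range, n_depth = tl_2d.shape
    c = (n_xy - 1) / 2.0
    x, y = np.meshgrid(np.arange(n_xy) - c, np.arange(n_xy) - c,
                       indexing="ij")
    r = np.sqrt(x ** 2 + y ** 2)          # horizontal range in voxels
    idx = np.clip(r.astype(int), 0, n_range - 1)
    return tl_2d[idx]                     # same depth profile at each azimuth
```

The resulting volume can be uploaded directly as a 3D texture for the rendering stage the abstract describes.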
3-D surface reconstruction has been an important research topic in digital image processing for many years. Reconstruction from a few views is an ill-posed problem; to reduce the uncertainty, the solution has to be regularized by incorporating some a priori information, which is the approach generally adopted by the reconstruction methods described in the literature. First, the images are processed to remove the air annulus. Then volume rendering is used to reconstruct the workpiece in the Matlab environment. The cross-section image resulting from the 2-D convolution projection reconstruction serves as the input image file for the 3-D reconstruction; the noise caused by converting paper or film images into digital images is reduced in this way. Furthermore, the additional image-matching step is avoided by applying the fan-beam convolution projection algorithm, which determines whether the images are matched. 3-D reconstruction and quantitative analysis can significantly improve the accuracy and reproducibility of lesion measurements and of the estimates of luminal narrowing and geometry that characterize the hemodynamic extent and functional consequences of these lesions.
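The reconstruction step can be illustrated with a small sketch using scikit-image's parallel-beam filtered back-projection as a stand-in for the fan-beam convolution projection algorithm (fan-beam data are typically rebinned to parallel-beam form first); the toy phantom is an assumption.

```python
# Sketch: recover a cross-section from its projections via filtered
# back-projection, a parallel-beam stand-in for the paper's fan-beam
# convolution projection reconstruction.
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                      # toy workpiece section
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)           # simulate projections
recon = iradon(sinogram, theta=theta, filter_name="ramp")
```

Stacking such reconstructed cross-sections along the axis of the workpiece yields the 3-D data set on which the volume rendering and quantitative analysis operate.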