We investigate the problem of finding the visible pieces of a scene of objects from a specified viewpoint. In particular, we are interested in the design of an efficient hidden surface removal algorithm for a scene composed of iso-oriented rectangles. We propose an algorithm that, given a set of n iso-oriented rectangles, reports all visible surfaces in O((n + k) log n) time and linear space, where k is the number of reported surfaces. The previous best result, by Bern [Journal of Computer and System Sciences 40 (1990) 49–69], has the same time complexity but uses O(n log n) space.
This paper describes a course in image computation that is designed to follow and build upon an established course in computer graphics. The course is centered on images: how they are generated, manipulated, matched, and symbolically described. It builds on the student's knowledge of coordinate systems and the perspective projection pipeline. It treats image generation techniques not covered in the computer graphics course, most notably ray tracing. It introduces students to basic image processing concepts such as Fourier analysis, and then to basic computer vision topics such as principal components analysis, edge detection, and symbolic feature matching. The goal is to prepare students for advanced work in either computer vision or computer graphics.
We consider the problem of simplifying a planar triangle mesh using edge contractions, under the restriction that the resulting vertices must be a subset of the input set. That is, an edge must be contracted onto one of its adjacent vertices, which removes the other adjacent vertex. We show that if the perimeter of the mesh consists of at most five vertices, then we can always find a vertex not on the perimeter that can be removed in this way; if the perimeter consists of more than five vertices, such a vertex may not exist. In order to maintain a higher number of removable vertices under the above restriction, we study edge flips which can be performed in a visually smooth way. A vertex removal preceded by one such smooth operation is called a 2-step removal. Moreover, we introduce the possibility that the user defines "important" vertices (or edges) which must remain intact. Given m such important vertices or edges, we show that a simplification hierarchy of size O(n) and depth O(log(n/m)) can be constructed by 2-step removals in O(n) time, such that the simplified graph contains the m important vertices and edges, and at most O(m) other vertices from the input graph. In some triangulations, many vertices may not even be 2-step removable. In order to provide the option to remove such vertices, we also define and examine k-step removals, which increase the lower bound on the number of removable vertices.
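A minimal sketch of the restricted contraction described above, at the abstract graph level (our illustration: the function name and adjacency-set representation are assumptions, and the paper's geometric validity tests are omitted):

```python
# Contract an edge (u, v) onto its endpoint u: remove v and reconnect
# its neighbors to u. This is the "subset" contraction the abstract
# describes, where new vertex positions are never introduced.

def contract_edge(adj, u, v):
    """Contract edge (u, v) onto u in an adjacency-set graph."""
    assert v in adj[u], "(u, v) must be an edge"
    for w in adj[v]:
        if w != u:
            adj[w].discard(v)   # detach w from the vanishing vertex
            adj[w].add(u)       # and reattach it to the survivor
            adj[u].add(w)
    adj[u].discard(v)
    del adj[v]

# Example: a triangulated square with one interior vertex 4.
adj = {0: {1, 3, 4}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {0, 2, 4},
       4: {0, 1, 2, 3}}
contract_edge(adj, 0, 4)  # remove interior vertex 4 onto vertex 0
print(adj)                # 0:{1,2,3}, 1:{0,2}, 2:{0,1,3}, 3:{0,2}
```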
This paper presents a predictive solution to the problem of computing the intersections between a ray and a parametric polynomial surface patch. The method enhances the Bézier clipping technique, which has emerged as an alternative to numerical and algebraic approaches. The improvement described here parallels the Aitken extrapolation used to accelerate the convergence of numerical methods. We first prove that, under certain conditions, the local parametric bounds in the Bézier clipping process are monotonically convergent over successive subdivisions. This implies that the de Casteljau step in each clipping tends to assume a certain pattern after sufficiently many subdivisions. Based on this observation, the global parametric estimate of an intersection is expressed as an infinite sum containing the local clip values. We then present a predictive scheme which achieves faster convergence than the original Bézier clipping technique.
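For background, the de Casteljau step mentioned above is the standard repeated linear interpolation sketched below (the textbook algorithm, not the paper's predictive scheme); it both evaluates the curve at a parameter t and splits the control polygon there, which is the subdivision each Bézier clipping iteration performs:

```python
# One de Casteljau pass: repeatedly interpolate adjacent control
# points; the first/last points of each level form the two halves
# of the split control polygon.

def de_casteljau(ctrl, t):
    """Evaluate/split a Bezier curve at t; returns (point, left, right)."""
    pts = list(ctrl)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    right.reverse()
    return pts[0], left, right

point, left, right = de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], 0.5)
print(point)  # (2.0, 1.5) for this cubic
```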
Global parameterizations of parametric algebraic curves or surfaces are defined over infinite parameter domains. Considering parameterizations in terms of rational functions that have real coefficients and vary over real parameter values, we show how to replace one global parameterization with a finite number of alternate bounded parameterizations, each defined over a fixed, bounded part of the real parameter domain space. The new bounded parameterizations together generate all real points of the old one, and in particular the points corresponding to infinite parameter values in the old domain. We term such an alternate finite set of bounded parameterizations a finite representation of a real parametric curve or surface. Two solutions are presented for real parametric varieties of arbitrary dimension n. In the first method, a real parametric variety of dimension n is finitely represented in a piecewise fashion by 2n bounded parameterizations, with individual pieces meeting with C∞ continuity; each bounded parameterization is a map from a unit simplex of the real parameter domain space. In the second method, only a single bounded parameterization is used; it is a map from the unit hypersphere centered at the origin of the real parameter domain space. Both methods start with an arbitrary real parameterization of a real parametric variety and apply projective domain transformations of different types to yield the new bounded parameterizations. Both methods are implementable in a straightforward fashion. Applications of these results include displaying entire real parametric curves and surfaces (except those real points generated by complex parameter values) and computing normal parameterizations of curves and surfaces (settling an open problem for quadric surfaces).
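A concrete one-dimensional instance (our illustration, not taken from the paper) is the unit circle: the standard rational parameterization reaches the point (-1, 0) only as the parameter goes to infinity, and two bounded pieces in the spirit of the first method recover it:

```latex
% Illustration (ours): two bounded pieces covering the unit circle.
\[
  \left(\frac{1-t^{2}}{1+t^{2}},\ \frac{2t}{1+t^{2}}\right),\ t\in[-1,1],
  \qquad
  \left(\frac{u^{2}-1}{u^{2}+1},\ \frac{2u}{u^{2}+1}\right),\ u\in[-1,1].
\]
```

The second piece arises from the first via the projective substitution t = 1/u and yields (-1, 0) at u = 0, the point the unbounded parameterization only reaches as t → ∞; together the two pieces (2n pieces for n = 1) generate every real point of the circle.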
We define a new model of complexity, called object complexity, for measuring the performance of hidden-surface removal algorithms. This model is more appropriate for predicting the performance of these algorithms on current graphics rendering systems than the standard measure of scene complexity used in computational geometry.
We also consider the problem of determining the set of visible windows in scenes consisting of n axis-parallel windows in ℝ³. We present an algorithm that runs in optimal Θ(n log n) time. The algorithm solves, in the object complexity model, the same problem that Bern addressed in the scene complexity model.
An n-vertex simple polygon P is said to have a Hamiltonian triangulation if it has a triangulation whose dual graph contains a Hamiltonian path. Such triangulations are useful in fast rendering engines in computer graphics. We give a new characterization of polygons with Hamiltonian triangulations and use it to devise O(n log n)-time algorithms to recognize such polygons. We also give efficient algorithms for several related problems.
This paper presents two self-similar models for controlling curves and surfaces. The first model is based on IFS (Iterated Function Systems) theory and the second on subdivision curve and surface theory. Both methods employ the detail concept, as in the wavelet transform, and allow multiresolution control of objects with control points at any resolution level.
In the first model, the detail is inserted independently of the control points, so it must be rotated when deformations are applied. In contrast, the second method describes details relative to the control points, allowing free control-point deformations.
Modeling examples of curves and surfaces are presented, demonstrating the manipulation capabilities of the models.
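As a generic sketch of the detail concept in the second model (our simplification, not the paper's construction): a coarse closed curve is refined by one subdivision step, and stored detail vectors are added back relative to the refined points, so edits made at the coarse level carry the fine details along with them.

```python
import numpy as np

def subdivide(points):
    """One cubic B-spline subdivision step on a closed polygon (n -> 2n)."""
    p = np.asarray(points, dtype=float)
    prev, nxt = np.roll(p, 1, axis=0), np.roll(p, -1, axis=0)
    out = np.empty((2 * len(p), p.shape[1]))
    out[0::2] = (prev + 6.0 * p + nxt) / 8.0   # repositioned old vertices
    out[1::2] = (p + nxt) / 2.0                # new edge midpoints
    return out

def synthesize(coarse, details):
    """Refine the coarse polygon, then re-add the stored detail vectors."""
    return subdivide(coarse) + details

coarse = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
details = np.random.default_rng(0).normal(0.0, 0.05, (2 * len(coarse), 2))

coarse[0] += [0.0, -0.5]            # edit at the coarse level...
fine = synthesize(coarse, details)  # ...the details ride along with it
```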
Mesh coarsening is the reduction and simplification of a highly detailed mesh in such a way that the original geometry is preserved as well as possible. The earliest coarsening algorithms were developed for real-time visualization in the then-emerging field of computer graphics and were later adopted for some element topologies of the finite element method (FEM). This article presents an algorithm that adapts the mesh decimation process to the requirements of both the FEM and the boundary element method (BEM) in acoustics. The algorithm's ability to significantly reduce the CPU times of both numerical methods is demonstrated.
A recently developed topological mesh modeling approach allows users to change the topology of orientable 2-manifold meshes and to create unusual faces. Handle-faces are one such type of face, commonly created during topology changes. This paper shows that vertex insertion and corner cutting subdivision schemes can effectively be used to reconstruct handle-faces, and these reconstructions clearly reveal the structure of these unusual faces. The paper makes three contributions. First, we develop a new corner cutting scheme that provides a tension parameter to control the shape of the subdivided surface. Second, we develop careful and efficient remeshing algorithms for our corner cutting scheme that use only the basic operations provided by our topological mesh modeling approach; this implementation ensures that our new corner cutting scheme preserves topological robustness. Finally, a comparative study shows that the corner cutting schemes create better handles and holes than the well-known Catmull-Clark scheme.
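On curves, the effect of such a tension parameter can be sketched as follows (a hypothetical curve analogue of our own; the paper's scheme operates on 2-manifold meshes through its topological operators):

```python
def corner_cut(points, w=0.25):
    """One tension-controlled corner-cutting step on a closed polygon.

    Each edge is cut at ratios w and 1 - w; w = 0.25 reproduces Chaikin's
    scheme, while smaller w keeps the result closer to the input polygon.
    """
    cut = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        cut.append(((1 - w) * x0 + w * x1, (1 - w) * y0 + w * y1))
        cut.append((w * x0 + (1 - w) * x1, w * y0 + (1 - w) * y1))
    return cut

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tight = corner_cut(square, w=0.10)   # hugs the control polygon
loose = corner_cut(square, w=0.25)   # classic Chaikin cut
```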
Meshes, which generalize polyhedra by allowing non-planar faces, are the most commonly used objects in computer graphics. Modeling 2-dimensional manifold meshes with a simple user interface is an important problem in computer graphics and computer-aided geometric design. In this paper, we propose a conceptual framework for modeling meshes. Our framework guarantees topologically correct 2-dimensional manifolds and provides a new user interface paradigm for mesh modeling systems.
In this paper we present a novel approach that enables the rendering of large shared datasets at interactive rates using inexpensive workstations. Our algorithm is based on view-dependent rendering and client-server technology: servers host the large datasets and manage the selection of the various levels of detail, while clients receive blocks of update operations which are used to generate the appropriate level of detail in an incremental manner. We assume that servers are capable machines in terms of storage capacity and computational power, while clients are inexpensive workstations with limited 3D rendering capabilities. For optimization purposes we have developed two similar approaches, one for local area networks and one for wide area networks; in the second we made several changes to adapt to the limitations of wide area networks. To mitigate network latency we have developed two mechanisms that cache the blocks of update operations on the client side and predict the future view parameters of clients based on their recent behavior. Our approach dramatically reduces the amount of memory used by each client and by the entire computing system, since the dataset is stored only once, in the local memory of the server. In addition, it decreases the load on the network as a result of the incremental updates contributed by view-dependent rendering.
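The prediction mechanism might, for instance, extrapolate recent view parameters linearly, as in the hypothetical sketch below (the paper's actual predictor and view parameterization are not detailed here):

```python
# Hypothetical predictor: linearly extrapolate the client's recent view
# parameters so the server can prepare the matching block of update
# operations before the client requests it.

def predict_view(prev, curr, lookahead=1.0):
    """Extrapolate each view parameter (position, orientation, ...)."""
    return [c + lookahead * (c - p) for p, c in zip(prev, curr)]

# view parameters: (x, y, z, yaw) sampled on two consecutive frames
prev = [0.0, 0.0, 10.0, 0.00]
curr = [0.5, 0.0, 9.5, 0.02]
print(predict_view(prev, curr))  # [1.0, 0.0, 9.0, 0.04]
```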
Many algorithms are based on computing intersections of lines, planes, etc., and work with representations in Euclidean space. Sometimes very complex mathematical notation is used to express simple mathematical solutions. This paper presents solutions to selected problems that can be solved easily in the projective space representation. If the principle of duality is used, quite surprising solutions can be found, and new useful theorems can be generated as well. It is shown that it is not necessary to solve a linear system of equations to find the intersection of two lines in E² or the intersection of three planes in E³. Plücker coordinates and the principle of duality are used to derive an equation of a parametric line in E³ as the intersection of two planes. This formulation avoids division operations and increases the robustness of the computation. The presented approach to intersection computation is especially well suited to applications where robustness is required, e.g. large GIS/CAD/CAM systems.
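The two-line case in E² illustrates the principle (this is the standard projective computation, consistent with but simpler than the paper's Plücker-coordinate derivation): a line ax + by + c = 0 is the homogeneous vector (a, b, c), and by duality the intersection point of two lines is simply their cross product, with no linear system to solve and, until optional normalization, no division:

```python
import numpy as np

l1 = np.array([1.0, -1.0, 0.0])   # line x - y = 0
l2 = np.array([1.0, 1.0, -2.0])   # line x + y = 2

p = np.cross(l1, l2)              # homogeneous point (x, y, w)
print(p)                          # [2. 2. 2.]  ->  affine point (1, 1)

# p[2] == 0 signals parallel lines (an intersection at infinity),
# detected robustly without performing any division.
```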
This paper presents a set of information and communication technologies developed to improve the practice of cued speech. They are based on three-dimensional (3D) graphics and cover the entire end-to-end content chain: production, transmission, and visualization. Starting from a set of potential applications, the requirements of the targeted cued-speech system are established. The research and development path takes into account high-quality animation, real-time constraints, personalization, user acceptability and, equally important, the ease and feasibility of deployment. The latter led to a strong orientation toward open standards. The original components of the system include 3D graphics and animation encoders, streaming servers, and visualization engines. The core technology is validated in two real-time applications: a web service for text-to-animation conversion and a chat service supporting two or more users.
This paper deals with the problem of detecting "weak symmetry" in a polygon, a special bijective and continuous mapping between the vertices of the given polygon. An application of this work is the automatic reconstruction of 3D polygons symmetric with respect to a plane from free-hand sketches of weakly symmetric 2D polygons. We formalize the weak-symmetry notion and highlight its many properties, which lead to an algorithm for detecting it. The research closest to the proposed approach is the detection of skewed symmetry; skewed symmetry detection, however, deals only with the reconstruction of planar mirror-symmetric 3D polygons, while our method is able to identify symmetry in projections of planar as well as nonplanar mirror-symmetric 3D polygons.
The current repair process for worn-out blades and blisks of aeroengines and industrial gas turbines is highly manual, dependent on operator experience, and hence error-prone; more sophisticated CNC-driven laser equipment is needed to replace manual welding in the repair. This paper presents an innovative strategy that leads toward the automation of laser welding and cladding. The project uses reverse engineering techniques to capture the geometric shape of the broken area through a digitized point cloud and the nominal geometry. The core software technologies cover four aspects: (i) point-to-surface best-fitting technology that aligns the point cloud coordinate system with the nominal CAD coordinate system, (ii) a procedure to extract a broken boundary that automatically separates the broken area from the unbroken area, (iii) a geometric model representation for the broken area, and (iv) generation of an STL file to drive the CNC laser machine. The tool has been implemented with the UG API, and experimental results show that the presented strategy is efficient for repair automation.
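The best-fitting step (i) can be illustrated, under the simplifying assumption of known point-to-point correspondences, by the classical SVD-based rigid registration (the Kabsch method); the actual tool fits the point cloud to the nominal CAD surface rather than to paired points:

```python
import numpy as np

def best_fit_rigid(P, Q):
    """Return R, t minimizing sum ||R @ p + t - q||^2 over pairs (p, q)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Scanned points P should land on nominal points Q after applying (R, t).
P = np.random.default_rng(1).normal(size=(100, 3))
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ true_R.T + np.array([5.0, 0.0, -2.0])
R, t = best_fit_rigid(P, Q)
print(np.allclose(R, true_R), np.allclose(t, [5.0, 0.0, -2.0]))  # True True
```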
The objective of this paper is to give a study of Bézier and B-spline curves. The paper is divided into three parts: the first studies Bézier curves and their mathematical solutions, the second describes B-spline curves and their mathematical solutions, and the third shows the application of B-spline curves in wavelet decomposition. The Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are used for evaluation, and it is shown that Bézier and B-spline curves are indispensable in the field of computer graphics.
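For reference, the B-spline machinery such a survey rests on is the standard Cox-de Boor recursion (textbook background, not the paper's wavelet experiments):

```python
# Cox-de Boor recursion: evaluates the i-th B-spline basis function of
# degree k over the knot vector `knots` at parameter t.

def bspline_basis(i, k, t, knots):
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Quadratic basis functions on a uniform knot vector sum to 1 inside
# the valid parameter range (partition of unity).
knots = [0, 1, 2, 3, 4, 5, 6]
print(sum(bspline_basis(i, 2, 2.5, knots) for i in range(4)))  # 1.0
```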
This paper presents an effective method based on a genetic algorithm for optimizing rendering quality in image-based painterly rendering. Using a multi-level evolutionary approach, the proposed method produces, for a variety of input images, results that are better in a statistically significant way than those of previous methods.
Rendering is a computer graphics process that generates images from a three-dimensional scene comprising models, lights, and cameras. Ray tracing is a category of rendering algorithms that calculates pixel colors by simulating the physical behavior of light rays, such as reflections, refractions, and scattering. The core problem of ray tracing is a high-dimensional numerical integration, and classical Monte Carlo ray tracing requires a large number of rays to reduce the error. By leveraging the inherent parallelism of quantum computing, we propose a framework for a quantum ray tracing algorithm and perform simulation experiments to show that it has quadratically faster error convergence than classical Monte Carlo ray tracing.
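The classical baseline being improved upon is plain Monte Carlo integration, whose root-mean-square error decays as O(1/√N); the sketch below demonstrates that baseline only (the quantum algorithm itself is not shown, and the claimed quadratic speedup corresponds to roughly O(1/N) convergence):

```python
import numpy as np

# Monte Carlo estimate of a known integral: each 100x increase in the
# sample count N shrinks the error only ~10x, i.e. O(1/sqrt(N)).
# Ray tracing integrates a far higher-dimensional function the same way.

rng = np.random.default_rng(42)
exact = 1.0 / 3.0                       # integral of x^2 over [0, 1]

for n in [10**2, 10**4, 10**6]:
    x = rng.random(n)
    est = np.mean(x**2)
    print(f"N = {n:>7}: error = {abs(est - exact):.2e}")
```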
In this paper, we propose a new fractal deformation technique. An “extended unit Iterated Shuffle Transformation (ext-unit-IST)” is a mapping that changes the order of the places of a code in a code space. When applied on a geometric space, it constructs a fractal-like repeated structure, named “local resemblance”. In our previously proposed fractal deformation technique, a geometric shape was deformed by applying an ext-unit-IST to displacement vectors (d-vectors) given on the shape. In the new technique proposed in this paper, the ext-unit-IST is applied instead to the rates of increase of the d-vectors. This allows the d-vectors to vary widely without disturbing the shape and improves the deformation quality. Several examples demonstrate the performance of the newly proposed technique.