Point Sample Rendering
Aravind Kalaiah and Amitabh Varshney


Surfaces have traditionally been rendered using triangles as the display primitive. The last decade has witnessed tremendous growth in the complexity of graphics datasets, whose sizes today far exceed the resolution of display devices. As a result, a significant fraction of the triangles occupy a screen-space area that is smaller than or comparable to a single pixel. Using triangles as the fundamental rendering primitive is highly wasteful in such scenarios, and we advocate points as a more suitable rendering primitive for such mega models. If the surface is sampled at a sufficiently high rate that the screen-space distance between sample points is less than a pixel's width, point-based rendering schemes offer an efficient and viable alternative to triangle-based rendering. We have developed a point-based rendering scheme that efficiently displays points with normals and local curvature information. Our scheme is superior to other point-based rendering schemes that do not take advantage of surface curvature information for local illumination and shading.


(a) Diffuse (b) Specular (c) Diffuse + Specular

Apart from efficiently rendering finely sampled surface areas, a general point primitive must also be able to render surface areas that are not densely sampled. This requires that the point primitives correctly shade the pixels around their point of projection while ensuring that: (1) there are no holes (or pixel gaps) in the rendered surface, and (2) the pixels are shaded in conformance with the surface geometry. Moreover, given that a surface can have varying degrees of complexity, the point primitive must support adaptive sampling of the surface so that low-frequency surface areas can be sampled sparsely. We handle these issues by using the surface curvature information at the sampled points to derive our rendering primitive, called the Differential Point (DP). The curvature information is used to derive a local surface geometry at each DP which approximates the underlying surface near that point. This surface approximation is used to derive the local normal distribution at each point, which is in turn used for shading. The size of the approximating surface is set to be inversely proportional to the point curvatures, so that points from high-curvature areas have a small local geometry while points in low-curvature areas can represent larger areas of the underlying surface. This also allows us to adjust the point sampling density in accordance with the local surface complexity.
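As an illustration, the local geometry at a DP can be modeled as a second-order patch in the principal frame, z = (k1·x² + k2·y²)/2, whose analytic normal gives the local normal distribution and whose extent shrinks with curvature. The sketch below assumes this standard second-order parameterization and a tunable tolerance constant c; the paper's exact formulation may differ.

```python
import numpy as np

def dp_normal(k1, k2, x, y):
    """Unit normal of the local second-order approximation
    z = (k1*x^2 + k2*y^2)/2, evaluated at offset (x, y) in the
    principal frame of the DP. Illustrative parameterization."""
    n = np.array([-k1 * x, -k2 * y, 1.0])
    return n / np.linalg.norm(n)

def dp_extent(k1, k2, c=0.5):
    """Half-extents of the approximating rectangle, inversely
    proportional to the principal curvature magnitudes. The
    tolerance c is a hypothetical constant, not from the paper."""
    eps = 1e-8
    return c / max(abs(k1), eps), c / max(abs(k2), eps)
```

At the sample point itself (x = y = 0) the normal is (0, 0, 1), i.e. the tangent-plane normal, and high-curvature DPs get small rectangles while flat-region DPs cover larger areas.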


Effectiveness of simplification: (a) Supersampled (b) Simplified
Cyberware Venus model: (a) Original (b) Simplified

The overall computation is split into two stages: (1) pre-processing (sampling, simplification, and texture computation) and (2) run-time computation (transformation and shading). To start with, the geometric model is sampled for DPs, and the principal curvature values and directions are computed at each sampled point. This is followed by a simplification process which compares the curvature characteristics of each DP with those of its neighbors and prunes away the redundant DPs. DPs that resemble their neighbors in curvature properties are assigned a higher priority for pruning. The bulk of the simplification time is spent verifying that pruning a DP leaves no gaps in the surface coverage.
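The pruning-priority idea can be sketched as a similarity score between a DP's principal curvatures and those of its neighbors. The metric below (an inverse of the largest curvature difference) is a hypothetical choice for illustration; the text only states that DPs resembling their neighbors get higher pruning priority.

```python
import numpy as np

def prune_priority(dp, neighbors):
    """Pruning priority of a DP given its neighbors' curvatures.
    dp and each neighbor are (k1, k2) principal-curvature pairs.
    Returns a value in (0, 1]; 1.0 means the DP is curvature-identical
    to all neighbors and is the best candidate for pruning.
    Hypothetical metric, chosen for illustration."""
    k = np.array(dp, dtype=float)
    diffs = [np.linalg.norm(k - np.array(n, dtype=float)) for n in neighbors]
    return 1.0 / (1.0 + max(diffs))
```

A simplifier would repeatedly remove the highest-priority DP, subject to the (more expensive) check that its neighbors' approximating rectangles still cover the surface without gaps.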

The differential points are categorized into 256 varieties based on the relative combination of their principal curvature values. As a preprocess, the normal distribution of each of these quantized DPs is computed and stored as a texture map. A rectangle is then placed on the tangent plane of each DP, with its width and height proportional to the principal curvatures. At runtime, the surface approximation at each DP is rendered as a normal-mapped rectangle. Shading involves computing the light vector and the halfway vector at the ends of the surface approximation and mapping them to the vertices of the rectangle. These vector parameters are then used by the nVIDIA register combiners on the GPU, which perform the necessary multiplications and vector dot products to obtain shading on a per-pixel basis.
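One way to obtain 256 classes is to clamp each principal curvature to a fixed range and quantize it uniformly to 16 levels, giving 16 × 16 = 256 (k1, k2) bins, each with a precomputed normal map. The clamping range and uniform binning below are illustrative assumptions, not the paper's exact quantization.

```python
def dp_class(k1, k2, kmax=1.0, levels=16):
    """Quantize a principal-curvature pair into one of levels*levels
    (= 256 by default) classes, so a normal map can be precomputed
    once per class. kmax is an assumed clamping range; the binning
    scheme is an illustrative sketch."""
    def bin_of(k):
        # Map [-kmax, kmax] to [0, 1], clamping out-of-range values.
        t = min(max((k + kmax) / (2.0 * kmax), 0.0), 1.0)
        return min(int(t * levels), levels - 1)
    return bin_of(k1) * levels + bin_of(k2)
```

At runtime each DP then indexes its class's normal map rather than storing a per-point normal distribution, which is what keeps the per-DP storage small.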


(a) No encoding (b) Each DP encoded in 13 bytes

We demonstrate our work on both parametric surfaces (NURBS) and triangle mesh models. In the case of a NURBS surface, the component patches are sampled uniformly in the parametric domain and simplified independently of each other. For triangle meshes, the vertices are used as the sample points. For an equal number of primitives, DPs produce much better rendering quality than a pure splat-based approach. For comparable visual quality, DPs are about two times faster and require about 75% less disk space than splatting primitives. They also fare well in comparison with triangle-based approaches, which have similar rendering speeds. All the test cases were run on an 866MHz Pentium 3 PC with 512MB RDRAM and an nVIDIA GeForce2 GPU with 32MB of DDR RAM. The 256 normal maps were mip-mapped textures of resolution 32 x 32.

(a) Differential Points (b) Square Primitives (c) Rectangle Primitives (d) Elliptical Primitives
Comparison of rendering quality for the same number of rendering primitives representing the Utah teapot (157K points)


Our approach has many benefits to offer:

  1. Rendering: The surface can be rendered with fewer (point) primitives by pushing more computation into each primitive.
  2. Storage: The reduction in the number of primitives more than compensates for the extra bytes of information stored with each point primitive, thus achieving a significant reduction in storage.
  3. Generality: The information stored with our point primitives is sufficient to derive (directly or indirectly) the requisite information for prior point primitives.
  4. Simplification: DPs are amenable to a simplification scheme that significantly reduces the redundancy in surface representation.



We would like to acknowledge the following sources of support:

This work is based upon the work supported by the National Science Foundation under grants ACR-98-12572 and IIS-00-81847. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

© Copyright 2013, Graphics and Visual Informatics Laboratory, University of Maryland, All rights reserved.