Hierarchical Structures For Dynamic Polygonal Simplification
This was, to my knowledge, the first paper to introduce dynamic
view-dependent polygonal simplification. It was submitted to SIGGRAPH
96, but justly rejected because it (1) was not well written enough, and (2)
produced "dropouts", transient artifacts in which polygons
disappear for a frame. My second attempt, a year later, addressed
both of these concerns and more, and was accepted to SIGGRAPH 97. So
while this paper may be of historical interest, I recommend that readers
interested in view-dependent simplification check out our 1997 SIGGRAPH paper, a better-written,
more informative, and more up-to-date presentation of the algorithms described
here.
Developers interested in view-dependent simplification should also
check out VDSlib, a public-domain library that implements the latest
version of these algorithms.
Still under construction!
To be exact, I never fully webified the paper. Here is a not-quite-finished conversion of the
paper: the formatting is a bit strange and the one diagram is missing,
but you may find it useful. Or you can download the paper as PostScript or
PDF. Either way, you'll want to look at the images below. Clicking
on an image will download a (very) high-resolution TIFF file.
In the meantime, here's the abstract:
This paper presents a novel technique for simplifying polygonal
environments. The technique differs from previous multi-resolution
methods in that it operates dynamically, simplifying the scene
on the fly as the user's viewing position shifts, and adaptively,
simplifying the entire database without first decomposing the
environment into individual objects. Each frame, the simplification
process queries a spatial subdivision of the model to generate a scene
containing only polygons "important" given the current viewpoint. This
spatial subdivision, an octree, classifies the polygons of the scene
according to which regions of space they intersect. When the volume
of space associated with an octree node occupies less than a
user-specified amount of the screen, all vertices within that node are
collapsed together and degenerate polygons filtered out. An active
list of visible polygons is maintained for rendering. Since
frame-to-frame movements typically involve small changes in viewpoint,
and therefore modify the active list by only a few polygons, the
method can take advantage of temporal coherence for greater speed. The
algorithm has been implemented and tested successfully on a wide range
of models, providing a 2x to 4x increase in rendering performance with
only slight degradation of image quality. On larger models, or in
situations where greater degradation is acceptable, the algorithm
should perform even better.
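To make the screen-space collapse criterion from the abstract concrete, here is a minimal sketch of how an octree node might be tested against a user-specified screen-coverage threshold. This is not the code from the paper or from VDSlib; the names (OctreeNode, screenFraction, collapseThreshold) are hypothetical, and projecting the node's bounding sphere is a common approximation rather than the paper's exact metric.

```cpp
// Minimal sketch, assuming a simple octree node and a bounding-sphere
// projection; all names here are hypothetical, not from the paper or VDSlib.
#include <cmath>

struct Vec3 { float x, y, z; };

struct OctreeNode {
    Vec3  center;      // center of the node's bounding cube
    float halfWidth;   // half the cube's edge length
    bool  collapsed;   // true if the node's vertices are merged into one
};

// Approximate the fraction of the screen covered by a node by projecting
// a sphere enclosing its bounding cube onto the view plane.
float screenFraction(const OctreeNode& node, const Vec3& eye,
                     float fovY /* vertical field of view, radians */)
{
    float dx = node.center.x - eye.x;
    float dy = node.center.y - eye.y;
    float dz = node.center.z - eye.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    float radius = node.halfWidth * std::sqrt(3.0f);  // sphere enclosing the cube
    if (dist <= radius) return 1.0f;  // viewer inside the node: never collapse

    // Angular size of the sphere relative to the vertical field of view.
    float angular = 2.0f * std::asin(radius / dist);
    return angular / fovY;
}

// Collapse the node's vertices to a single representative when the node
// covers less than the user-specified fraction of the screen.
void updateNode(OctreeNode& node, const Vec3& eye,
                float fovY, float collapseThreshold)
{
    node.collapsed = screenFraction(node, eye, fovY) < collapseThreshold;
    // In the full algorithm, collapsing or expanding a node would also
    // update the active list of visible polygons, filtering out any
    // polygons made degenerate by the collapse.
}
```

In the actual system this test would presumably be applied hierarchically and incrementally, revisiting only nodes whose result can change as the viewpoint moves, which is how the active list exploits temporal coherence; that bookkeeping is omitted here.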