The ability to produce volume-rendered images interactively opens the door to a host of new application capabilities. Volumetric data is commonplace today. Radiologists use magnetic resonance images (MRI) and computed tomography (CT) data in clinical diagnoses. Geophysicists map and study three-dimensional voxel Earth models. Environmentalists examine pollution clouds in the air and plumes underground. Chemists and biologists visualize potential fields around molecules and meteorologists study weather patterns. With so many disciplines actively engaging in the study and examination of three-dimensional data, today's software developers need to understand techniques used to visualize this data. You can use three-dimensional texture mapping, an extension of two-dimensional texture mapping, as the basis for building fast, flexible volume renderers.
This article tells you how to build an interactive, texture mapping-based volume renderer in OpenGL. The article also includes a pseudo-coded volume renderer to help illustrate particular concepts.
Understanding Volume Rendering
Volume rendering is a powerful rendering technique for three-dimensional data volumes that does not rely on intermediate geometric representation. The elements of these volumes, the three-dimensional analog to pixels, are called voxels. The power of volume-rendered images is derived from the direct treatment of these voxels. Contrasting volume rendering with isosurface methods reveals that the latter methods are computationally expensive and show only a small portion of the data. On the other hand, volume rendering lets you display more data, revealing fine detail and global trends in the same image. Consequently, volume rendering enables more direct understanding of visualized data with fewer visual artifacts.
All volume-rendering techniques accomplish the same basic tasks: coloring the voxels; computing voxel-to-pixel projections; and combining the colored, projected voxels. Lookup tables and lighting color each voxel based on its visual properties and data value. You determine a pixel's color by combining all the colored voxels that project onto it. This combining takes many forms, often including summing and blending calculations. This variability in coloring and combining allows volume-rendered images to emphasize, among other things, a particular data value, the internal data gradient, or both at once. Traditionally, volume renderers have produced their images by one of three methods.
In contrast to those traditional methods, this article discusses a volume-rendering method that uses the OpenGL graphics API. The primitives and techniques that this method integrates are common graphics principles, not generalized mathematical abstractions.
The basic approach of this method is to define a three-dimensional texture and render it by mapping it to a stack of slices that are blended as they are rendered back to front. Figure 1A (below) shows a stack of slices. The first step of this technique involves defining one or more three-dimensional textures containing the volumetric data. The next step involves using lookup tables to color the texture data. A stack of slices is drawn through the volume with the colored texture data mapped onto the slices. The slices are blended in back-to-front order into the framebuffer. You can vary the blending type and lookup tables to produce different kinds of volume renderings.
The advantage of this technique is that you can implement it with presently available hardware-accelerated, OpenGL graphics architectures. As a result, you can produce volume-rendering applications that run at interactive rates. This method also preserves quality and provides considerable flexibility in the rendering scheme.
Before you can render volumetric data, you must define it as a texture. If the data is small enough for the texturing subsystem, you can load it into OpenGL as a single three-dimensional texture, but this is uncommon because hardware-accelerated OpenGL texturing architectures typically limit the maximum texture size. This article assumes that size is an issue, because the hardware limit is often significantly smaller than the average data set you might want to volume render.
You can work around this limit by dividing your data into multiple textures. The most common method of doing this is to divide the volume into small bricks that are each defined as a single texture. These bricks are rectangular subvolumes of the original data, which are often created by cutting the volume in half (or thirds) in one or more dimensions.
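To make the bricking step concrete, here is a minimal sketch (not taken from the article's renderers) that queries the implementation's 3D-texture size limit and defines one texture per brick. The helper copySubVolume() is hypothetical, and the sketch assumes 8-bit luminance data whose dimensions are exact multiples of the brick dimensions and whose brick sizes already satisfy the power-of-two requirement of early 3D-texture hardware; a complete renderer would also bind each brick to its own texture object or reload it at render time, as the pseudocode later in this article does.

#include <GL/gl.h>
#include <GL/glext.h>

/* copySubVolume() is a hypothetical helper that extracts the
   bw x bh x bd subvolume starting at voxel (x, y, z). */
extern GLubyte *copySubVolume(const GLubyte *vol, int volW, int volH, int volD,
                              int x, int y, int z, int bw, int bh, int bd);

void defineBricks(const GLubyte *volData, int volW, int volH, int volD)
{
    GLint maxSize;
    int bw, bh, bd, x, y, z;

    /* largest 3D texture dimension the implementation accepts */
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE_EXT, &maxSize);

    /* clamp the brick dimensions to the hardware limit */
    bw = (volW > maxSize) ? maxSize : volW;
    bh = (volH > maxSize) ? maxSize : volH;
    bd = (volD > maxSize) ? maxSize : volD;

    /* walk the volume, defining one 3D texture per brick
       (edge-texel duplication for seam removal, discussed later,
       is omitted here) */
    for (z = 0; z < volD; z += bd)
        for (y = 0; y < volH; y += bh)
            for (x = 0; x < volW; x += bw) {
                GLubyte *brick = copySubVolume(volData, volW, volH, volD,
                                               x, y, z, bw, bh, bd);
                glTexImage3DEXT(GL_TEXTURE_3D_EXT, 0, GL_LUMINANCE8,
                                bw, bh, bd, 0, GL_LUMINANCE,
                                GL_UNSIGNED_BYTE, brick);
            }
}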
Since you composite each slice on top of the others with order-dependent blending operations, you must ensure a total ordering of all the rendered slices at each pixel. If you do not maintain this order, the blending operations used to combine slices will produce an incorrect result. The most common ordering is back to front: slices farthest from the eye point are drawn first, and those closest are drawn last.
To preserve this total ordering at every pixel, the bricks are rendered one at a time, sorted from back to front, so that bricks farther from the eye are rendered before closer ones.
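One simple way to obtain that order, sketched below with a hypothetical Brick structure that stores each brick's center in eye coordinates, is to sort the bricks by their squared distance from the eye; for the non-overlapping, axis-aligned bricks described above this yields a usable back-to-front ordering.

#include <stdlib.h>

typedef struct {
    float center[3];   /* brick center, already transformed to eye space */
    /* ... texture data, extents, and so on ... */
} Brick;

static int compareBricks(const void *a, const void *b)
{
    const Brick *ba = (const Brick *)a;
    const Brick *bb = (const Brick *)b;

    /* in eye space the eye sits at the origin, so the squared length
       of the center vector measures distance from the viewer */
    float da = ba->center[0] * ba->center[0] +
               ba->center[1] * ba->center[1] +
               ba->center[2] * ba->center[2];
    float db = bb->center[0] * bb->center[0] +
               bb->center[1] * bb->center[1] +
               bb->center[2] * bb->center[2];

    return (da > db) ? -1 : (da < db) ? 1 : 0;   /* farthest first */
}

void sortBricksBackToFront(Brick *bricks, int count)
{
    qsort(bricks, count, sizeof(Brick), compareBricks);
}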
Perhaps the most important feature afforded by volume rendering is classification, which is the process of grouping similar objects inside a volume. You can use classification to color these groups. For example, in a volume of CT density data, you could color bone white and tissue red. Not only can you color objects, but you can also remove the objects in a given group from the scene altogether. You can also make groups semi-transparent, enabling you to view both the structure of the top group as well as the groups beneath it.
One method of data classification involves passing the original volumetric data through lookup tables. These tables select the color and opacity that represent each data value. OpenGL supports many different lookup tables at several places along the imaging pipeline. You want the texture lookup tables, which are applied after the texturing system. Using these tables lets the texturing system interpolate the actual input data and then look up the result, rather than interpolating already looked-up color and opacity values. For architectures that do not support texture lookup tables, use the OpenGL color table or pixel map.
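As an illustration only, the sketch below builds a simple CT-style classification table and loads it as a texture lookup table with the SGI color-table extension; the density breakpoints, colors, and opacities are assumptions chosen for the example, not values from the article.

#include <GL/gl.h>
#include <GL/glext.h>

void loadClassificationTable(void)
{
    static GLubyte table[256][4];
    int i;

    for (i = 0; i < 256; i++) {
        if (i < 80) {            /* low densities: treated as empty space */
            table[i][0] = table[i][1] = table[i][2] = 0;
            table[i][3] = 0;
        } else if (i < 180) {    /* mid densities: soft tissue, red and translucent */
            table[i][0] = 200; table[i][1] = 40; table[i][2] = 40;
            table[i][3] = 40;
        } else {                 /* high densities: bone, white and fairly opaque */
            table[i][0] = table[i][1] = table[i][2] = 255;
            table[i][3] = 200;
        }
    }

    /* route texel values through the table after texture filtering */
    glColorTableSGI(GL_TEXTURE_COLOR_TABLE_SGI, GL_RGBA, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, table);
    glEnable(GL_TEXTURE_COLOR_TABLE_SGI);
}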
Once you classify the volumetric data and make it available as texture data, you render it by texture mapping it to a collection of polygons. The best way to sample the texture data is to render the volume as parallel, equidistant texture-mapped slices that are orthogonal to the viewing direction. You clip these slices to the extents of the volume data to create polygons. Figure 1A shows these sampling slices untextured; Figure 1B shows the same slices textured. The object in the volume (a skull) is vaguely discernible. Adding more slices would improve the image.
The number of slices rendered is directly related to both quality and speed. The more slices that you render, the more accurate the results, but the longer the rendering time. The number of slices required to adequately sample the data in the viewing direction is a function of the data itself as dictated by the Nyquist rate. The texture data is always sampled at screen pixel resolution in the directions parallel to the viewing plane, due to the nature of the OpenGL texture-mapping capability. To increase the quality of the results, have OpenGL resample the texture linearly.
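One possible way to choose the slice count and spacing is sketched below; the interface and the two-samples-per-voxel target are assumptions made for illustration, following the Nyquist argument above.

#include <math.h>

/* Picks roughly samplesPerVoxel slices per voxel along the volume
   diagonal, which bounds the depth of the volume from any viewing
   direction. */
void computeSlicing(const float volSizeModel[3],   /* extent in model units */
                    const int volSizeVoxels[3],    /* extent in voxels */
                    float samplesPerVoxel,         /* e.g. 2.0 for the Nyquist rate */
                    int *numSlices, float *interSliceDistance)
{
    double diagModel = sqrt((double)volSizeModel[0] * volSizeModel[0] +
                            (double)volSizeModel[1] * volSizeModel[1] +
                            (double)volSizeModel[2] * volSizeModel[2]);
    double diagVoxels = sqrt((double)volSizeVoxels[0] * volSizeVoxels[0] +
                             (double)volSizeVoxels[1] * volSizeVoxels[1] +
                             (double)volSizeVoxels[2] * volSizeVoxels[2]);

    *numSlices = (int)ceil(diagVoxels * samplesPerVoxel);
    *interSliceDistance = (float)(diagModel / *numSlices);
}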
This viewer-orthogonal sampling technique has two major advantages.
Blending the slices computes the discrete form of an integral. The result is equivalent to integrating along a ray that leaves the eye, passes through the screen, and then travels through the volumetric data. Each slice represents an interval along this ray. To compute this integral correctly, the slices must be attenuated as the interval each one represents shrinks. This means that as more slices are rendered, each one must get dimmer in order to maintain a constant brightness for the image. If you examine the integral, this attenuation works out to be an exponential function of the distance between slices. You can compute this attenuation and use it as the alpha value for the slice polygons when you render them.
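Written out under the usual exponential absorption model, the per-slice alpha might be computed as sketched below; the extinction parameter is an illustrative density scale, not a value from the article.

#include <math.h>
#include <GL/gl.h>

void setSliceAlpha(float interSliceDistance, float extinction)
{
    /* opacity contributed by one slice of thickness interSliceDistance;
       as the slices get thinner (and more numerous), each one gets dimmer */
    float alphaWeight = (float)(1.0 - exp(-extinction * interSliceDistance));

    /* the slice polygons are drawn white; the lookup table supplies the
       per-texel color and opacity, and this alpha scales the blend */
    glColor4f(1.0f, 1.0f, 1.0f, alphaWeight);
}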
Another way to render the volumetric data is called multi-planar reconstruction (MPR). Only one slice is drawn in an MPR rendering. You can orient this slice at any angle and then use it to inspect the interior structures in the data. Maintaining a high-quality (at least linear) resampling of the data during MPR rendering is crucial. Multi-planar reconstruction can be fast and informative, but the resulting images are generally not considered to be volume renderings.
Determining the Texture Coordinates
When mapping the texture data to slices, you must be very careful. OpenGL texture-space mapping can be tricky, and failure to map the texture data to your slices correctly can produce visual artifacts. Even though OpenGL specifies that textures lie in a [0.0,1.0] coordinate space in each dimension, the texture data you provide is not mapped onto the entire region. Figure 2 shows a one-dimensional, four-texel texture. Notice that the [0.0,1.0] space is divided into four regions, one for each texel. The texels are defined at the center of each region. Coordinates between two centers are colored by combining the values of both nearby texels. Texture coordinates before the first texel center or after the last one are colored by both the corresponding edge texel and the border. Volume slices should never be colored by the border. Therefore, you should avoid specifying coordinates before the first texel center or after the last. Also notice that the locations of the edge texels' centers move as the texture size changes.
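A small helper, shown below purely for illustration, makes the safe range explicit: in an N-texel dimension, texel i is centered at (i + 0.5) / N, so slice texture coordinates should stay between texelCenterCoord(0, N) and texelCenterCoord(N - 1, N).

/* Returns the texture coordinate of the center of texel i in an
   N-texel dimension. For the four-texel texture of Figure 2 the
   usable range is [0.125, 0.875]. */
float texelCenterCoord(int i, int numTexels)
{
    return ((float)i + 0.5f) / (float)numTexels;
}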
A more subtle texture-coordinate problem arises when you use multiple three-dimensional textures: seams appear between adjacent bricks because interpolation does not occur across brick edges. Figure 3A shows the result when a four-texel, one-dimensional checkerboard texture is mapped to a simple rectangle. The seam in Figure 3B is created when the single texture is divided into two two-texel textures. These seams also appear between adjacent bricks when volume rendering. To correct this problem, duplicate the edge texels in both bricks so that each one appears twice. This causes both bricks to grow by one to three texels. By adjusting the texture coordinates at the vertices, so that each brick handles half of the previously missing region, you can remove the seam. Figure 3C shows these two textures with the corrected texture coordinates. The seam is gone.
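One way to express that coordinate adjustment along a single axis is sketched below; the interface, and the assumption of exactly one duplicated texel per shared face, are illustrative rather than taken from the article. The range starts and ends at texel centers on outer faces and moves a further half texel inward on faces shared with a neighbor, so each brick covers exactly half of the duplicated region.

/* Computes the texture-coordinate range [sMin, sMax] that a brick of
   brickTexels samples should map onto its slice vertices along one axis. */
void brickTexRange(int brickTexels, int hasLeftNeighbor, int hasRightNeighbor,
                   float *sMin, float *sMax)
{
    float texelSize = 1.0f / (float)brickTexels;

    /* outer face: start at the first texel center; shared face: start a
       half texel further in, at the middle of the duplicated region */
    *sMin = hasLeftNeighbor ? texelSize : 0.5f * texelSize;

    /* mirror image for the right-hand face */
    *sMax = hasRightNeighbor ? 1.0f - texelSize : 1.0f - 0.5f * texelSize;
}

For two two-texel bricks grown to three texels each by duplication, for example, these ranges work out to [1/6, 2/3] and [1/3, 5/6], so the two bricks meet at the same position in the data and the seam disappears.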
Compositing the Textured Slices
After classifying the data, creating the brick textures, and mapping them to slices, you are ready to composite the slices. The most common compositing scheme for back-to-front volume rendering is called over blending. Each compositing scheme is good at accenting different kinds of internal data structures, and the ability to switch between them interactively is extremely useful.
OpenGL supports a collection of framebuffer blending operations that you can use to construct different compositing operations. You can simulate over blending by specifying addition as the OpenGL blend equation. The blend function multiplies the incoming slice by its alpha value, and multiplies the framebuffer pixels by one minus the source alpha value. You can produce maximum-intensity blending by setting the OpenGL blending equation to the maximum operator. You can produce other blending types by using different combinations of the OpenGL blend function and blend-equation parameters.
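For reference, a minimal sketch of these two setups, using the EXT_blend_minmax entry points that also appear in the pseudocode later in this article, might look like this:

#include <GL/gl.h>
#include <GL/glext.h>

void setOverBlending(void)
{
    /* incoming slice scaled by its alpha, framebuffer by (1 - alpha) */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquationEXT(GL_FUNC_ADD_EXT);
    glEnable(GL_BLEND);
}

void setMaximumIntensityBlending(void)
{
    /* keep the brighter of the incoming slice and the framebuffer pixel */
    glBlendEquationEXT(GL_MAX_EXT);
    glEnable(GL_BLEND);
}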
You can display related opaque geometric data in concert with volume-rendered data. Geometric data is correctly sorted and blended if it lies inside the volume. You can also combine traditional geometric data renderings with volumes, producing, for example, a surface embedded in a cloud of data.
The OpenGL depth buffer (z-buffer) sorts opaque objects for you. Semi-transparent objects are more complex because, unlike opaque objects, their rendering result is order-dependent. However, if your scene contains only opaque geometry along with the volumetric data, you can combine the two by doing the following (the pseudocode later in this article follows the same sequence):
1. Render the opaque geometry first, with depth testing and depth writes enabled.
2. Make the depth buffer read-only by disabling depth writes, leaving depth testing enabled.
3. Render the volume slices back to front, blending them as usual.
The volumetric data is then correctly sorted and blended on top of the geometric data where appropriate.
You can examine examples of the techniques described in this article as source-code implementations. The volume renderers are available via anonymous FTP. Vox is a compact, simple renderer, perfect for studying the basic principles of a volume-rendering application. Volren is larger and more complex, and supports all the functions discussed in this article plus quite a bit more. You can find these volume renderers at the following site:
Anonymous FTP site: sgigate.sgi.com
Vox distribution: /pub/demos/vox.tar.Z
Volren distribution: /pub/demos/volren-<version>.tar.Z
Pseudo-Coding a Volume Renderer
Draw()
{
    /* enable the depth buffer for reading and writing */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);

    /* draw opaque geometry */
    ...

    /* make the depth buffer read-only so the blended slices are
       depth-tested against the geometry but do not occlude one another */
    glDepthMask(GL_FALSE);

    /* load the classification lookup tables */
    glColorTableSGI(GL_TEXTURE_COLOR_TABLE_SGI, GL_RGBA, ...);
    glEnable(GL_TEXTURE_COLOR_TABLE_SGI);

    /* compute the per-slice alpha weight from the inter-slice
       distance (exponential attenuation) */
    alphaWeight = 1.0 - exp(-interSliceDistance);
    glColor4f(1.0, 1.0, 1.0, alphaWeight);

    /* set up the compositing function */
    if over blending then {
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBlendEquationEXT(GL_FUNC_ADD_EXT);
    }
    if maximum intensity blending then {
        glBlendEquationEXT(GL_MAX_EXT);
    }

    /* sort the bricks back to front from the eye point */
    ...

    /* enable texturing and blending */
    glEnable(GL_TEXTURE_3D_EXT);
    glEnable(GL_BLEND);

    /* render the bricks */
    for each brick B in sorted order do {
        /* load brick B with linear resampling */
        glTexParameteri(GL_TEXTURE_3D_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3DEXT(GL_TEXTURE_3D_EXT, ...);

        /* sort the slices back to front from the eye point */
        ...

        for each slice S in brick B in sorted order do {
            /* compute spatial coordinates of slice S */
            intersect slice with brick extents

            /* compute texture coordinates of slice S */
            compute corresponding texture domain coordinates

            /* adjust texture coordinates to account for seams */
            scale back texture coordinates

            /* render the slice */
            render slice S
        }
    }
}
Volume rendering is a powerful, flexible way to visualize data. Previously available only through slow software-based solutions, it is now available through OpenGL, a standardized, platform-independent programming interface. OpenGL enables volume-rendering solutions that are fast, portable, cost-effective, and maintainable. Using OpenGL, you can produce software products that combine geometry, vectors, polygons, and volumes, with the ability to integrate new features rapidly.
Sabella, Paolo, "A Rendering Algorithm for Visualizing 3D Scalar Fields," Computer Graphics (SIGGRAPH '88 Proceedings) 22(4) pp. 51-58 (August 1988).
Upson, Craig and Keeler, Michael, "V-BUFFER: Visible Volume Rendering," Computer Graphics (SIGGRAPH '88 Proceedings) 22(4) pp. 59-64 (August 1988).
Drebin, Robert A., Carpenter, Loren, and Hanrahan, Pat, "Volume Rendering," Computer Graphics (SIGGRAPH '88 Proceedings) 22(4) pp. 65-74 (August 1988).
Cabral, Brian, Cam, Nancy, and Foran, Jim, "Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware," Proceedings 1994 ACM/IEEE Symposium on Volume Visualization (IEEE CS Press) pp. 91-97 (Order No. PR07067, 1995).
Many thanks are due to Bob Drebin, Nancy Cam and Jim Foran for their interesting work and ideas as well as technical editing skills. Brian Cabral was instrumental in introducing me to this work. All four are the true creators and owners of the ideas discussed here. Additionally, Anatole Gordon deserves accolades for his dedicated editing work on this article.