**An Implementation Of A Recursive Ray Tracer
That Renders Caustic Lighting Effects**

**Shu Chiun Cheah**

**Summer 1996**

**(independent study with Prof. David Mount)**

**Dept. of Computer Science,**

**University of Maryland.**

In this project, we implemented a ray tracer capable of generating caustic light patterns. The program incorporates a method inspired by Henrik Wann Jensen's photon map approach to rendering caustics [6,7,8]. We also made use of ideas from the Phong model for non-caustic illumination computation, and of bump mapping to model wavy water surfaces. This paper discusses the methods implemented. Our program has successfully rendered several images containing caustic effects; representative images are shown and discussed.

**1. Overview **

Caustics provide some of the most striking indirectly illuminated light patterns one can find in nature. Caustic light patterns are formed when light rays reflected from or transmitted through specular surfaces strike a diffuse surface [7]. Most commonly, the specular surfaces have high curvature, which focuses the light rays in their reflected and transmitted directions.

Traditional ray tracing, also known as *ray casting*, as it was
originally developed by Appel [1] and by Goldstein and Nagel [2][3] was
a method for determining visible surfaces in a scene. Ray tracing as introduced
by Appel was also able to generate shadows. Later, Whitted [4] and Kay
[5] extended ray tracing to handle refraction and specular reflection.
However, these traditional methods could not render indirect illumination
and thus were unable to produce caustic light patterns. James Arvo extended
ray tracing with a preprocessing step in which caustics are computed.
In Arvo's approach, energy packets, also called *photons*, are emitted
from each light source. The path of each energy packet is traced through
the scene, and each photon is deposited into an initially empty texture map on the
diffuse surface it eventually strikes. Caustics are therefore
rendered as texture maps on diffuse surfaces. Jensen [7] proposed the photon
map [6][8] method, which shares Arvo's basic idea of tracing photons
from the light sources; in addition to storing the location at which each photon
is deposited, he also stores its direction of incidence.

In this project, we implemented a simpler version of Jensen's photon map approach to rendering caustics. In particular, our purpose is to use the program to render images of underwater scenes containing caustic light patterns caused by a wavy water surface. The program computes illumination only for Lambertian surfaces. The kinds of objects supported by the current version of our program are spheres and convex polyhedra; a convex polyhedron is modeled as the intersection of a set of half-spaces. Although limited, these objects are sufficient for experimenting with rendering caustic light patterns. The details of the modeling of these objects are beyond the scope of this paper.

Section 2 describes a simplified version of Jensen's photon map technique [6,7,8]. In section 3, we will present the images generated by our ray tracer. Finally, section 4 concludes the paper as well as points out areas of possible improvements of our program.

**2. Methods**

In this section, we assume that the reader is familiar with the traditional recursive ray tracing method as described in [1], [2], [3], [4], and [5]. We refer the reader to [10] and [12] for detailed explanations of various ray tracing techniques.

The program described here is implemented in C++. The code is largely platform independent, and thus can easily be ported to any platform on which a C++ compiler is available. The program outputs images in the ppm (portable pixmap) file format.

**2.1. Light Sources**

Two kinds of light sources are supported in our program. These are point light sources and directional light sources. For point light sources, the user is allowed to specify the following information:

- The location of the point in Cartesian coordinates

- The intensity in RGB values between 0 and 1

- The outgoing light ray resolution (which will be described later)

- The attenuation coefficients in the equation

*1 / (a + bd + cd^{2})*

in which *a*, *b*, and *c* are the coefficients and *d*
is the distance of the illumination query point from the light source location.

- The energy level per photon specified as a factor of intensity RGB values, e.g. if factor is 0.1, the energy per photon is 0.1 * intensity.
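The attenuation computation above is straightforward; a minimal C++ sketch follows (the struct and method names are ours, not the paper's):

```cpp
#include <cassert>

// Distance attenuation 1/(a + b*d + c*d^2) for a point light (Section 2.1).
// Coefficients a, b, c follow the paper; struct/method names are ours.
struct Attenuation {
    double a, b, c;                   // user-specified coefficients
    double factor(double d) const {   // d: distance from the light source
        return 1.0 / (a + b * d + c * d * d);
    }
};
```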

For directional light sources, our model assumes that they all originate from planes parallel to the xy-plane in the Cartesian coordinate system. Thus, all but the directions parallel to the xy-plane are supported. The information required from the user for a directional light source includes the following:

- The z-coordinate of the plane as a real number.

- The parameterization of points from which photons will be fired. The program currently supports polar coordinate parameterization and Cartesian coordinate parameterization.

- The lower bound and upper bound of each of the two parameterization variables. For polar coordinates, the two variables are theta, the angle in degrees from the y-axis, and r, the distance from the origin. For Cartesian, they are x and y.

- The resolution of each parameterization variable, i.e., the distance between two consecutive points along the direction of increase of that variable.

- The light intensity in RGB values.

- The energy level per photon as a factor of the light intensity.

- For polar parameterization, one may also specify a bounding box such that only the points within it fire photons in the preprocessing stage.

**2.2. Preprocessing**

The basic idea of preprocessing is to shoot photons (energy packets) out from each light source, trace them through the scene, and keep track of those photons that are transmitted through or reflected by specular surfaces and land on diffuse surfaces.

During preprocessing, point light sources are modeled as unit spheres
centered at the location of the respective light sources. Photons are fired
from the center of each light source sphere through points located on the
sphere. These points are described in spherical coordinates with respect
to the origin located at the center of the sphere. Since photons are fired
through these points, each photon's trajectory direction is identified
by *theta* and *phi*. *r* is not necessary since the direction
of the photon's trajectory is independent of the radius of the sphere.
Phi here is the angle in degrees from the z-axis, while theta is the angle
from the y-axis, both measured clockwise. The user is allowed to specify the lower
bound and upper bound for each of theta and phi. Besides that, the distance
between two consecutive values for each of the variables needs to be specified
as well.
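The direction enumeration described above can be sketched as follows; `Vec3` and the exact axis convention used here are our assumptions for illustration:

```cpp
#include <cmath>
#include <vector>

// Photon firing directions for a point light (Section 2.2): theta is the
// angle in degrees from the y-axis, phi the angle from the z-axis; both are
// stepped from a user-given lower bound to an upper bound.
struct Vec3 { double x, y, z; };

std::vector<Vec3> photonDirections(double thetaLo, double thetaHi,
                                   double phiLo, double phiHi, double step) {
    const double DEG = 3.14159265358979323846 / 180.0;
    std::vector<Vec3> dirs;
    for (double phi = phiLo; phi <= phiHi; phi += step)
        for (double theta = thetaLo; theta <= thetaHi; theta += step) {
            double sp = std::sin(phi * DEG), cp = std::cos(phi * DEG);
            // unit direction: r is irrelevant, as noted in the text
            dirs.push_back({sp * std::sin(theta * DEG),
                            sp * std::cos(theta * DEG), cp});
        }
    return dirs;
}
```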

Directional light sources, as mentioned in Section 2.1, are modeled as planes. Unlike the point light sources, however, we have two variables changing the points where photons are fired rather than the direction of their trajectories. The direction of the photon trajectories, of course, is fixed and is specified by the user. The user has two options in parameterizing the points of fire. The first option is to identify each point with an xy-coordinate with respect to the origin, which is the point (0,0,z) on the plane. The second option is to identify each point with a polar-coordinate with respect to the point r=0 and the y-axis. The former will lead to a bunch of points lined up in grids on the plane, while the latter will end up with the points lined up in circles of various radii centered at the origin. As in the case of point light sources, the user is required to specify the lower bound and upper bound of each variable as well as the distance between two consecutive values for each variable.

For each light source in the scene, photons are fired by varying the light source's two variables, as described above, from their lower bounds to their upper bounds, incrementing by the distance specified by the user. The path of each photon is traced through the scene. Photons which do not hit any specular or transparent surfaces are ignored.

For each photon that hits a transparent surface, Snell's law [13] is
used to compute the new direction of the photon being transmitted into
the other medium. According to Snell's law, the angle θ_{t} made by the
transmitted direction with the normal vector
is given by the equation below.

*sin θ_{t} = (η_{i} / η_{t}) sin θ_{i}*

in which θ_{i} is the angle between
the incident direction and the normal vector,

θ_{t} is the angle between the transmitted
direction and the normal vector,

η_{i} is the index of refraction
of the medium on the incident side, and

η_{t} is the index of refraction
of the medium on the transmission side.
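The angle computation from Snell's law can be sketched as below; this is only the scalar angle, not the full 3-D transmitted direction, and the function name is ours:

```cpp
#include <cmath>

// Transmitted angle from Snell's law: eta_i*sin(theta_i) = eta_t*sin(theta_t)
// (Section 2.2). Angles are in radians here; a negative return value signals
// total internal reflection (no transmitted ray).
double snellTransmittedAngle(double thetaI, double etaI, double etaT) {
    double s = (etaI / etaT) * std::sin(thetaI);
    if (s > 1.0) return -1.0;   // total internal reflection
    return std::asin(s);
}
```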

Furthermore, the intensity, or energy level, of the photon is altered according to the color of the surface at the incident point. In other words, the photon color is "filtered." The new energy level of the transmitted photon is given by the following equation (applied componentwise over RGB).

*new_energy = k_{t} · photon_energy · color*

in which *k_{t}* is the transmission coefficient,

*photon_energy* is the energy level of the photon, and

*color* is the RGB color of the surface at the incident point.

When a photon hits a specular surface, the outgoing direction of the reflected photon is given by

*R = 2(N · L)N − L*

in which *R* is the reflected direction,

*N* is the normalized normal vector of the surface at the incident
point, and

*L* is the normalized vector opposite to the incident
direction.

The new energy level of the reflected photon is given by the following equation.

*new_energy = k_{r} · photon_energy · color*

in which *k_{r}* is the reflection coefficient,

*photon_energy* is the energy level of the photon, and

*color* is the RGB color of the specular surface.

Although not physically accurate, when a photon hits a surface that is both specular and transparent, a new photon is created for the reflection. This apparent paradox arises because each of our photons actually simulates a group of real-world "photons," some of which are reflected, some absorbed, and some transmitted when such a surface is hit.
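The per-photon reflection and energy-filtering steps above can be sketched together; `Vec3`, `Rgb`, and the helper names are ours:

```cpp
// Reflected direction R = 2(N.L)N - L and per-channel photon "filtering"
// new_energy = k * photon_energy * color, as described in Section 2.2.
struct Vec3 { double x, y, z; };
struct Rgb  { double r, g, b; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// N must be the unit surface normal, L the unit vector opposite the incident ray.
Vec3 reflect(const Vec3& N, const Vec3& L) {
    double k = 2.0 * dot(N, L);
    return {k * N.x - L.x, k * N.y - L.y, k * N.z - L.z};
}

// Attenuate a photon's energy by the surface color, scaled by k (k_r or k_t).
Rgb filterEnergy(double k, const Rgb& energy, const Rgb& color) {
    return {k * energy.r * color.r, k * energy.g * color.g, k * energy.b * color.b};
}
```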

When a photon that has been transmitted or reflected at least once hits a diffuse surface, the photon is inserted into a caustic photon map [6][7][8]. The information stored with each photon includes its energy level, its incidence direction, and its incidence point. In our program, we implemented the photon map as a kd-tree [14,15] structure. The photons are inserted into the kd-tree dynamically as the photons from each light source are traced.
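A minimal sketch of the photon record and the dynamic kd-tree insertion follows; the names and this simple insert routine are ours, not the paper's code:

```cpp
#include <memory>

// A caustic-map photon (Section 2.2): incidence point, incidence direction,
// and RGB energy. The kd-tree cycles its split axis x,y,z with depth.
struct Photon {
    double pos[3];    // incidence point
    double dir[3];    // incidence direction
    double energy[3]; // RGB energy level
};

struct KdNode {
    Photon p;
    std::unique_ptr<KdNode> left, right;
};

// Insert dynamically as photons are traced, alternating the split axis.
void insert(std::unique_ptr<KdNode>& node, const Photon& p, int depth = 0) {
    if (!node) { node = std::make_unique<KdNode>(); node->p = p; return; }
    int axis = depth % 3;
    insert(p.pos[axis] < node->p.pos[axis] ? node->left : node->right,
           p, depth + 1);
}
```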

The end of preprocessing is marked by having traced all the photons from all the light sources in the scene. At this point, caustic light patterns may be rendered by querying the caustic photon map.

**2.3. Waves**

In order to generate underwater scenes, we need to model simple waves on a plane that represents the water surface. In this program, we adapted the method of bump mapping [10,11], since it is not necessary for us to actually perturb the surface. Furthermore, we restrict all water surface planes to be parallel to the xy-plane. The plane is parameterized in x and y, and the normal vector at each (x,y) location is perturbed according to a sinusoidal function. At present, the waves are rather ad hoc in the sense that the user must modify the C++ source code of the program to change the wave forms. The program does, however, allow the user to easily alter the frequency, amplitude, and phase shift of existing waves without modifying the code.
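The bump-mapping idea above can be sketched as follows: tilt the plane's z-up normal by the slope of a sinusoidal height field without displacing the surface. The sample wave varying only in x is an illustrative stand-in, not the paper's actual function:

```cpp
#include <cmath>

// Bump-map style normal perturbation for a water plane parallel to the
// xy-plane (Section 2.3).
struct Vec3 { double x, y, z; };

Vec3 normalize(Vec3 v) {
    double n = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / n, v.y / n, v.z / n};
}

// Perturb n = (0,0,1) by the partials of h(x,y) = A*sin(f*x + phase):
// the perturbed normal is (-dh/dx, -dh/dy, 1), renormalized.
Vec3 waterNormal(double x, double y, double A, double f, double phase) {
    double dhdx = A * f * std::cos(f * x + phase);
    double dhdy = 0.0;   // this sample wave varies only in x
    return normalize({-dhdx, -dhdy, 1.0});
}
```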

**2.4. Rendering **

In recursive ray tracing, the rendering process involves the following steps. First, rays, which are essentially vectors, are shot from the camera (or eye) through points on a rectangular grid placed in front of the camera. Each of these rays is traced through the scene, during which the intersections of the ray with objects in the scene are determined. The intersection point of interest to us is the one closest to the origin of the ray. If there is no intersection, a default color specified by the user is returned. If the intersected object is reflective, a new ray is created with the direction given in the previous section. Similarly, if the object is transparent, a new ray with direction dictated by Snell's law is created. For each ray where an intersection exists, the illumination at the intersection point is computed using the following equation, which extends the Phong illumination model [9].

*I = k_{a}L_{a}color + Σ_{i=1}^{J} vis(P,i) · (L_{i} / (a + bd_{i} + cd_{i}^{2})) · [k_{d} color max(0, n·l_{i}) + k_{s} max(0, n·h_{i})^{α}] + k_{r} trace(P, R) + k_{t} trace(P, T)*

in which

*I* is the illumination in RGB value at the point P,

*k_{a}* is the ambient coefficient,

*L_{a}* is the intensity of the ambient light,

*L_{i}* is the intensity of light source *i*,

*color* is the RGB color of the surface at point P,

*vis(P,i)* is a boolean function returning 1 if light source *i*
is visible from point P, and 0 otherwise,

*J* is the number of light sources in the scene,

*1 / (a + bd_{i} + cd_{i}^{2})*
is the attenuation mentioned in the above subsection,

max(a,b) returns the larger of a and b,

*n* is the normal vector of the surface at point P,

*l_{i}* is the negation of the incident direction of light
source *i*,

*h_{i}* is the halfway vector between the incident direction of light source *i*
and the viewing direction at point P,

*k_{d}* is the diffuse coefficient of the surface,

*k_{s}* is the specular coefficient of the surface,

*α* is the specular sharpness exponent,

*k_{r}* is the reflection coefficient,

*k_{t}* is the transmission coefficient, and

*trace(p,r)* returns the illumination RGB value obtained by tracing
the ray *r* originating from the point *p*; it is applied to the reflected ray *R* and the transmitted ray *T*.
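One light source's contribution to the sum above might be computed as in the following single-channel (scalar) sketch; the function name and parameter order are ours, while the variable names follow the list above:

```cpp
#include <algorithm>
#include <cmath>

// One light's term of the extended Phong sum (Section 2.4):
// vis * L_i/(a + b*d + c*d^2) * (k_d*color*max(0,n.l) + k_s*max(0,n.h)^alpha).
double localTerm(bool visible, double Li, double a, double b, double c,
                 double d, double kd, double color, double nDotL,
                 double ks, double nDotH, double alpha) {
    if (!visible) return 0.0;                       // vis(P,i) == 0
    double atten    = 1.0 / (a + b * d + c * d * d);
    double diffuse  = kd * color * std::max(0.0, nDotL);
    double specular = ks * std::pow(std::max(0.0, nDotH), alpha);
    return Li * atten * (diffuse + specular);
}
```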

In order to render caustics, a caustic term is introduced into the above equation. Our program provides two different ways of computing the caustic term. Both options involve traversing the kd-tree structure of the caustic photon map to acquire the lighting information about the point P, whose illumination we are interested in.

Option 1 performs an approximate search for the N nearest caustic photons around a point P, where N is an integer provided by the user and P is as described above. Here, we are interested in computing the photon density around the point P.

Fig1. Initial search area, the circle (actually a sphere).

The search begins by traversing the kd-tree to find N caustic photons
within a sphere S. The initial radius of S is determined by an approximation
of the diagonal distance between two points originating from two opposite
corners of a box in the grid, as shown in Fig 1. The approximation is very
rough, as it uses similar triangles to estimate the radius. Nonetheless,
an accurate value of the grid size at the intersection point is not important
for our purposes, as we are merely trying to reduce the number of times the
kd-tree is traversed. If fewer than N photons are found, the radius of the sphere
is increased by a factor of 4 and the tree re-traversed until *at least*
N photons are found. Since we are concerned with photon density and not
nearest neighbors, encountering more than N photons simply means that the
point P is a very bright spot; thus, it is not necessary to perform further
computation to pin down the exact N nearest neighbors. Once we have located
all the photons in our region of interest, we compute the energy density
of that region. The surface area of the region is approximated by the area
of a circle whose radius is the final radius of the sphere S. Using
the incident direction of each photon, we can compute, according to Lambert's
law [13], the illumination in RGB value due to each photon. The caustic
illumination of the point P is then the sum of the illuminations due to
all photons in the sphere S divided by the approximate surface area.
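The radius-growing gather described above can be sketched as follows. `countInRange` stands in for the kd-tree range query and simply scans a flat list here; all names are ours:

```cpp
#include <vector>

// Caustic option 1 (Section 2.4): start from an initial radius and re-query
// until at least N photons fall inside the sphere, multiplying the radius
// by 4 on each miss.
struct P3 { double x, y, z; };

int countInRange(const std::vector<P3>& photons, P3 c, double r) {
    int n = 0;
    for (const P3& p : photons) {
        double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        if (dx*dx + dy*dy + dz*dz <= r*r) ++n;
    }
    return n;
}

// Returns the final search radius whose sphere holds at least N photons
// (growth is capped to avoid an infinite loop in empty regions).
double gatherRadius(const std::vector<P3>& photons, P3 c,
                    double r0, int N, int maxGrow = 16) {
    double r = r0;
    for (int i = 0; i < maxGrow && countInRange(photons, c, r) < N; ++i)
        r *= 4.0;   // grow by a factor of 4 and re-traverse
    return r;
}
```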

*c = (1 / (πr^{2})) Σ_{i=1}^{M} energy_{i} · [k_{d} color max(0, n·l_{i}) + k_{s} max(0, n·h_{i})^{α}]*

in which

*c* is the caustic illumination at the point P,

*r* is the final radius of the sphere S,

*M* is the number of photons found within the sphere S,

*n* is the normal vector of the surface at the point P,

*l_{i}* is the negation of the incident direction of photon *i*,

*h_{i}* is the halfway vector between the incident direction of photon *i*
and the viewing direction,

*k_{d}* is the diffuse coefficient of the surface in the Phong
model sense,

*k_{s}* is the specular coefficient of the surface,

*color* is the color of the surface at the point P,

*energy_{i}* is the energy level of photon *i*, and

max(a,b) is a function which returns the larger of a and b.

The second option searches for photons within a sphere S of a user-specified radius instead of searching for nearest neighbors. As in the first option, the kd-tree structure is traversed to find all photons within range. However, rather than computing the energy density, the energy level of each photon is attenuated according to its distance from the point P, whose caustic illumination is of interest. Thus, the caustic illumination equation differs from the one above.

*c = Σ_{i=1}^{M} (energy_{i} / (1 + factor · d_{i}^{a})) · [k_{d} color max(0, n·l_{i}) + k_{s} max(0, n·h_{i})^{α}]*

in which all the variables are the same as above except the following ones.

*M* is the number of photons found within the user-specified region,

*factor* is a user-specified real number larger than 0,

*d_{i}* is the distance of photon *i*'s incident point from the point
P, and

*a* is a user-specified integer.

By adding the *c* term into the overall illumination equation as
shown above, we will be able to render caustics using our ray tracing program.
The following section shows some of the pictures rendered by our program
as well as discussions on the caustic options being used.

**3. Results**

Fig 2. Room with spheres (with caustic)

Fig 3. Room with spheres (no caustic)

Figures 2 and 3 show a room with a blue specular surface on the left, four yellow transparent spheres floating in the middle, and one blue transparent sphere sitting on the right. The scene is illuminated by two point light sources located above all the objects; one is closer to the left wall, and the other to the right wall. Figure 3 was generated with no caustic illumination. Figure 2 was generated using the energy density option of our program, as described in the previous section, with N, the number of photons used to compute the photon density, set to 200.

Caustics due to the four yellow spheres can be noticed on all the walls. Furthermore, light rays are also focused at various points on the floor by the yellow spheres. The blue specular surface on the left reflects some light onto the right wall, where one can notice a slight blue illumination at the far corner. The blue sphere focuses some of these reflected rays as well.

The rendering of figure 2 took approximately 4 hours on a Sun Sparc Ultra workstation. There are several reasons for this. First, caustic photons land in all corners of the room, causing the program to traverse the caustic photon map for almost every pixel rendered. Besides that, our implementation of the photon density search re-traverses the kd-tree for every increment of the search sphere radius; thus, for regions with low photon density, the kd-tree may need to be traversed many more times than for regions with high photon density.

Fig 4. Underwater scene.

Fig 5. Another underwater scene.

The above pictures show the floor of a pool with a half sphere located at the middle of it. The light patterns on the floor are caused by the wavy water surface of the pool. For both images, the water surface is distorted by circular wave forms originating from two separate points on the surface. Figure 4 differs from figure 5 in that the locations of the two wave origins are different.

The equation used to perturb the water surface in these two images is given below.

*z(x,y) = A sin(f·√((x−a)^{2} + (y−b)^{2}) + φ) + A sin(f·√((x−c)^{2} + (y−d)^{2}) + φ)*

in which

*A* is the amplitude of the wave form,

*f* is the frequency of the wave form,

*φ* is the phase shift,

*(a,b)* is the xy-coordinate of the origin of the first wave, and

*(c,d)* is the xy-coordinate of the origin of the second wave.

The program also allows the user to specify a lower bound and an upper bound of the region in which each wave forms. For each wave origin, the bounds are concentric circles about the origin. Thus, by varying a combination of the bounds and *f*, we are able to render animation sequences of wave movements.
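A sketch of the two bounded circular wave trains might look like the following; the exact functional form here is an assumption, since the paper's own wave equation is not reproduced above:

```cpp
#include <cmath>

// Two circular wave trains on the water plane (Section 3): each wave's phase
// depends on the distance from its origin, and each is active only between
// a lower and upper radial bound about that origin.
struct Wave { double A, f, phase, ox, oy, rLo, rHi; };

double waveHeight(const Wave& w, double x, double y) {
    double r = std::hypot(x - w.ox, y - w.oy);   // distance from wave origin
    if (r < w.rLo || r > w.rHi) return 0.0;      // outside the bounding circles
    return w.A * std::sin(w.f * r + w.phase);
}

// Total perturbation of the height field: sum of the two wave trains.
double surfaceHeight(const Wave& w1, const Wave& w2, double x, double y) {
    return waveHeight(w1, x, y) + waveHeight(w2, x, y);
}
```

Varying `rLo`/`rHi` over time expands the active ring of each wave, which is how animation key frames of wave movements could be produced.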

The underwater images were rendered using the second caustic option, in which the photon search region is fixed and the energy level of each photon is attenuated by its distance from the point of interest, as mentioned in the previous section. We chose this option because the first option produces a smoother photon energy interpolation about the point of interest, leaving us with blurrier wave patterns on the floor. The light sources used for these images are directional, as we would like to simulate the effect of sunlight in the real world. The rendering time for each of these images was approximately 15 minutes, significantly less than for the room with spheres. This is because the kd-tree needs to be searched only once for each point illuminated by caustic effects.

**4. Conclusions**

This paper has described a simpler variation of Jensen's photon map approach to rendering caustics with a ray tracer. We implemented the methods described, and several of the images rendered by our program are shown above; the lighting patterns in each scene are discussed, and approximate rendering times are provided. The program has successfully produced images of caustic light patterns caused by wavy water surfaces, and it provides a mechanism for generating animation key frames of wave movements. It has also successfully produced images containing caustic illumination due to the surfaces of multiple objects.

Nevertheless, from the results we obtained, it is clear that one area calling for improvement is the rendering time when the photon density option is used. Specifically, we need to improve the nearest neighbor search of the caustic photon map. We would also like to implement models of other surfaces, such as quadric surfaces, in order to render more interesting caustic effects. Finally, the current version of the program cannot realistically render an underwater scene in which both the floor and the water surface are visible; it would be interesting to extend the program to allow that.

**5. References**

[1] Appel, A., **Some Techniques for Shading Machine Renderings of
Solids**, *Proceedings of Spring Joint Computer Conference*, 1968.

[2] Mathematical Application Group, Inc., **3-D Simulated Graphics
Offered by Service Bureau**, *Datamation*, 13(1), February 1968,
69.

[3] Goldstein, R.A., and R. Nagel, **3-D Visual Simulation**, *Simulation*,
16(1), January, 1971, 25-31.

[4] Whitted, T., **An Improved Illumination Model for Shaded Display**,
*Communications of the ACM*, 23(6), June 1980, 343-349.

[5] Kay, D.S., **Transparency, Refraction, and Ray Tracing for Computer
Synthesized Images**, M.S. Thesis, Program of Computer Graphics, Cornell
University, Ithaca, NY, January 1979.

[6] Henrik Wann Jensen and Niels Jorgen Christensen, **Photon Maps
in Bidirectional Monte Carlo Ray Tracing of Complex Objects**, *Computers
& Graphics*, 19(2), 215-224, 1995.

[7] Henrik Wann Jensen, **Rendering Caustics on Non-Lambertian Surfaces**,
*Proceedings of Graphics Interface '96*, 116-121, Toronto 1996.

[8] Henrik Wann Jensen, **Global Illumination using Photon Maps**,
*Rendering Techniques '96*, Eds. X. Pueyo and P. Schroder, Springer-Verlag,
21-30, 1996.

[9] Bui-Tuong, Phong, **Illumination for Computer Generated Pictures**,
*Communications of the ACM*, 18(6), June 1975, 311-317.

[10] Foley, J., van Dam, A., Feiner, S., Hughes, J., **Computer
Graphics --- Principles and Practice**, Addison-Wesley, 1996, p. 1043.

[11] Angel, E., **Interactive Computer Graphics --- a top-down
approach with OpenGL**, Addison-Wesley, 1997, pp. 398-400.

[12] Glassner, A. (ed.), **Introduction to Ray Tracing**, Academic
Press, 1989.

[13] Glassner, A., **Surface Physics for Ray Tracing**, in
*Introduction to Ray Tracing (Glassner ed.)*, Academic Press, 1989,
pp. 121-160.

[14] Sedgewick, R., **Algorithms in C**, Addison-Wesley Publishing
Co., 1990,
pp. 379-386.

[15] M. de Berg, M. van Kreveld, M. Overmars, O. Schwarzkopf,
**Computational Geometry: Algorithms and Applications**,
Springer-Verlag, 1997, pp. 97-103.