Project: Reflection and Transparency

So far (if you are following the “recommended” implementation order), our raytracer is able to compose multiple primitives into a scene and use the Phong lighting model to render them with varying degrees of “shininess”. However, there are two other surface properties that a raytracer is very well suited to model: reflectiveness and transparency. We put both of them under one umbrella because they require similar techniques to handle. The general idea is that a reflective surface shows a “mirror image” of the rest of the scene. To model this, we take the incoming ray and reflect it along the surface normal; whatever this reflected ray hits determines what the reflection shows. Transparency, on the other hand, causes refraction: when a ray hits a transparent surface, it is “bent” slightly, depending on the material properties (this is exactly what makes a spoon appear bent when you put it into a glass of water). In either case, we perform the original raytracing operation, hit a surface, and then, depending on the reflectiveness and transparency of that surface, shoot another ray into the scene. Of course, this new ray may again be reflected and/or refracted when it hits the next surface. To avoid looping infinitely, we usually limit the number of reflections (refracted rays are only bent “slightly” and should not continue infinitely, barring any very peculiar lens setups).

Reflection is fairly straightforward to implement, as it only requires shooting additional rays from a given location into a defined direction and combining the result with the surface color based on the reflectiveness of the material. Transparency adds an additional challenge, as rays must be classified as “inside” or “outside” volumes (each volume is associated with a refraction index, and the amount the ray is refracted depends on the ratio between the index of the material the ray leaves and that of the material it enters). Finally, and I do not recommend trying to handle this, reflection and transparency also interact with lighting in particular ways (recall that we used shadow rays to determine if a light source hits a material; what if the object obscuring the light source is - perhaps only partially - transparent? Even more challenging, maybe a light source is reflected by another surface and shines at a surface from two directions).

Reflection

There are two principal approaches to handling reflection: you could do this inside the intersect-method, but then each new primitive would need to be aware of reflection. The more flexible solution is to put reflection (and subsequently refraction) directly into the Raytracer class. After you shoot the ray into the scene and look at the first RayHit, in addition to asking the lighting model what the surface looks like at this point, you also check whether the reflectiveness property of the surface material is greater than 0. This property tells you how to combine the surface color with the result of the reflection ray: if it is 0, the surface is not reflective at all, and you do not need to (and shouldn’t) shoot any reflection ray. If it is 1, you have a perfect mirror and can ignore the surface color and use the color given by the reflection ray. In all other cases, you will need to “mix” the two colors, which you can do with the lerpColor-function. The Scene object also has a parameter reflections that tells you how often a single ray should be reflected.
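To make this concrete, here is a minimal sketch of what that logic could look like inside the Raytracer. Only lerpColor, EPS, the reflectiveness property, and the scene’s reflections limit come from this text; everything else (the names traceRay, scene.intersect, shade, scene.background, the {origin, direction} ray representation, and the p5.js-style vector helpers) is an assumption you would replace with your own names.

```javascript
// Sketch only: one possible shape for reflection handling in the raytracer.
function traceRay(scene, ray, depth) {
  const hit = scene.intersect(ray);           // first RayHit along the ray (or null)
  if (hit === null) return scene.background;  // nothing hit: return the background color

  const surfaceColor = shade(scene, hit);     // Phong lighting model at the hit point
  const k = hit.material.reflectiveness;      // 0 = not reflective, 1 = perfect mirror

  if (k <= 0 || depth >= scene.reflections) return surfaceColor;

  // Reflect the viewing direction about the surface normal and recurse.
  const v = p5.Vector.mult(ray.direction, -1);                        // points back toward the viewer
  const r = reflect(v, hit.normal);                                   // see the formula below
  const origin = p5.Vector.add(hit.position, p5.Vector.mult(r, EPS)); // offset by EPS
  const reflectedColor = traceRay(scene, { origin: origin, direction: r }, depth + 1);

  // Mix surface and reflection according to the reflectiveness.
  return lerpColor(surfaceColor, reflectedColor, k);
}
```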

To calculate the reflection ray, you can use the same formula as in the Phong lighting model (where we used it for the specular component to reflect the light): $R = 2\vec{N}(\vec{N}\cdot \vec{V}) - \vec{V}$, where N is the normal vector and V the vector that points to the viewer (i.e. the opposite direction of the incoming ray). You then shoot this ray into the scene from the impact location, again offset by a small EPS distance from the surface to avoid hitting the same object, note the appearance of the surface it hits (using the lighting model!), and repeat this process as long as you hit a reflective surface and have not exceeded the reflections limit.
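The formula itself is a one-liner; here is a possible helper, again assuming p5.js-style vectors (the function name reflect is not prescribed anywhere):

```javascript
// Sketch: R = 2N(N·V) - V, where v points from the surface toward the viewer
// and n is the (normalized) surface normal.
function reflect(v, n) {
  const twoNdotV = 2 * p5.Vector.dot(n, v);              // 2(N·V)
  return p5.Vector.sub(p5.Vector.mult(n, twoNdotV), v);  // 2N(N·V) - V
}
```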

Transparency (optional)

As noted above, transparency is almost the same as reflection, except that the ray continues into the hit object and then exits it again at the opposite end. Upon entry and exit, though, the ray is slightly redirected, depending on the material of the object. Just as before, the first step is to shoot the ray into the scene, and when it hits an object, you check the transparency value of the hit surface material. If this value is 0, the object is completely opaque and you don’t need to do anything else. Otherwise, you need to compute the refracted ray. Snell’s law states that:

\[\eta_1 \sin(\theta_1) = \eta_2 \sin(\theta_2)\]

Where the two $\eta$ (“eta”, which we will call h in code) are the refraction indices of the origin ($\eta_1$) and destination ($\eta_2$) materials. We assume that air/the environment has a refraction index of 1 (water, for example, has about 1.33, while diamond has about 2.42). $\theta_1$ is the angle the incoming ray has with the normal vector, and $\theta_2$ is the angle the refracted ray has below the surface with the (inverse) normal vector. Given the incoming ray $i$ and the normal vector $n$, you can get the refracted ray $t$ as:

\[t = \frac{\eta_1}{\eta_2}i + \left(\frac{\eta_1}{\eta_2} \cos(\theta_1) - \sqrt{1 - \sin^2(\theta_2)}\right)n\]

Where $\cos(\theta_1) = -i \cdot n$ and $\sin^2(\theta_2) = \left(\frac{\eta_1}{\eta_2}\right)^2 (1-\cos^2(\theta_1))$. One important thing to note is that the value under the square root may be negative. In this case, there should not be any refracted ray, as the incoming ray was at an angle where it will not pass through the surface. If you want to see how these formulas are derived, you can refer to this pdf.
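Put together, a refraction helper might look like the following sketch. The parameter names h1 and h2 follow this text’s convention for the two $\eta$ values; the vector helpers are again assumed to be p5.js-style, and returning null on a negative value under the root is just one way to signal “no refracted ray”.

```javascript
// Sketch of the refraction formula above. i is the normalized incoming ray
// direction, n the normalized surface normal (pointing against i), h1 the
// refraction index of the material the ray leaves, h2 of the one it enters.
function refract(i, n, h1, h2) {
  const ratio = h1 / h2;
  const cos1 = -p5.Vector.dot(i, n);                 // cos(θ1) = -i · n
  const sin2sq = ratio * ratio * (1 - cos1 * cos1);  // sin²(θ2)
  if (sin2sq > 1) return null;                       // value under the root would be negative
  // t = (η1/η2) i + ((η1/η2) cos(θ1) - sqrt(1 - sin²(θ2))) n
  return p5.Vector.add(
    p5.Vector.mult(i, ratio),
    p5.Vector.mult(n, ratio * cos1 - Math.sqrt(1 - sin2sq))
  );
}
```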

Once you have the refracted ray, you again shoot it into the scene, although you will want it to start slightly inside the volume. When it then exits the volume, you perform the same calculation (with the refraction indices reversed and the normal vector negated/pointing inward) and get a new ray. This ray may then hit another object, which may in turn be transparent (or not), and so on. Each time you pass such a barrier, the transparency value determines how you interpolate the color of the surface with whatever the refracted ray ultimately returns.
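One possible way to string these steps together for a single entry/exit pass, using the same assumed names as the sketches above (the refractionIndex property on the material and the name traceThroughVolume are likewise placeholders, not part of the project skeleton):

```javascript
// Sketch: refract into the volume, find the exit point, refract out again,
// and continue tracing from there. The caller would then mix the returned
// color with the surface color via lerpColor, weighted by the transparency.
function traceThroughVolume(scene, hit, ray, depth) {
  const mat = hit.material;
  // Entry: from the environment (index 1) into the material.
  const inward = refract(ray.direction, hit.normal, 1, mat.refractionIndex);
  if (inward === null) return null; // no refracted ray at this angle

  // Start slightly inside the volume and find where the ray exits it.
  let origin = p5.Vector.add(hit.position, p5.Vector.mult(inward, EPS));
  const exit = scene.intersect({ origin: origin, direction: inward });
  if (exit === null) return null;

  // Exit: refraction indices reversed, normal negated so it points inward.
  const outward = refract(inward, p5.Vector.mult(exit.normal, -1), mat.refractionIndex, 1);
  if (outward === null) return null;

  origin = p5.Vector.add(exit.position, p5.Vector.mult(outward, EPS));
  return traceRay(scene, { origin: origin, direction: outward }, depth + 1);
}
```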