Project: Movement and Rotations
Transformations (optional)
So far, we can place spheres, planes and triangles wherever we want, but we cannot do the same with cylinders or other primitives. Instead of deriving (perhaps complex) formulas for ray intersections with these primitives in arbitrary positions, we will add a new type of scene object that moves and/or rotates another scene object. The “trick” we are using here is that these transformations do not actually change how the ray intersections are calculated; instead of moving and rotating the object, they move and rotate the ray. For example, if we want a cylinder centered at (0,100,0) while the camera is at (0,0,0), we can instead “pretend” that the camera is at (0,-100,0) and the cylinder is at the origin. These transformed coordinates are therefore relative to the cylinder, i.e. in its local coordinate system. To use this in our raytracer, we follow three steps:
- Transform the ray into the local coordinate system
- Calculate the ray intersections in the local coordinate system
- Transform the ray intersections back into the original coordinate system
When performing these transformations, we need to pay attention to the order in which they are performed, and to which quantities we should transform (and which ones we should not). As a rule of thumb: positions should be rotated and moved, while directions should only be rotated.
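As a minimal sketch of this rule of thumb (applyRotation() is a hypothetical helper that performs the three rotations described below, and translation is an assumed PVector field holding the offset; neither name is part of the provided template):

```
// Positions get the full transformation, directions only the rotation part.
PVector transformPoint(PVector p) {
  return PVector.add(applyRotation(p), translation);  // rotate, then move
}

PVector transformDirection(PVector d) {
  return applyRotation(d);                            // rotate only, never translate
}
```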
Important: Our transformations are defined to be performed in this order:
- Rotate around the z-axis
- Rotate around the x-axis
- Rotate around the y-axis
- Translate
When we want to perform the inverse transformation, we have to perform these steps in reverse order.
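To make this concrete, the rotation part and its inverse could be structured as follows. This is only a sketch: rotZ, rotX and rotY are hypothetical helpers that each rotate a copy of a vector by a given angle around the corresponding global axis, and angleX, angleY, angleZ are assumed angle fields of the transform.

```
// Forward rotation: around z, then x, then y (matching the order above).
PVector applyRotation(PVector v) {
  PVector r = rotZ(v, angleZ);
  r = rotX(r, angleX);
  r = rotY(r, angleY);
  return r;
}

// Inverse rotation: the same steps in reverse order, each with a negated angle.
PVector applyInverseRotation(PVector v) {
  PVector r = rotY(v, -angleY);
  r = rotX(r, -angleX);
  return rotZ(r, -angleZ);
}
```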
The MoveRotation class therefore performs the following steps in its intersect method (a sketch follows after this list):
- Create a new ray by applying the inverse transformation to the ray origin, and (only) the inverse rotation to the ray direction
- Call intersect on the child
- For each returned RayHit: apply the transformation to the location, and (only) the rotation to the normal vector
- Return the transformed RayHits
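Putting these steps together, the intersect method could look like the sketch below. It uses the hypothetical helpers sketched above, and it assumes a Ray with origin and direction fields, a RayHit with location and normal fields, and a child field on the transform; the actual names and signatures in the template may differ.

```
ArrayList<RayHit> intersect(Ray r) {
  // 1. Transform the ray into the local coordinate system:
  //    full inverse transform for the origin, inverse rotation only for the direction.
  PVector localOrigin = applyInverseRotation(PVector.sub(r.origin, translation));
  PVector localDirection = applyInverseRotation(r.direction);
  // 2. Calculate the intersections in the local coordinate system.
  ArrayList<RayHit> hits = child.intersect(new Ray(localOrigin, localDirection));
  // 3. Transform the results back: full transform for positions, rotation only for normals.
  for (RayHit hit : hits) {
    hit.location = transformPoint(hit.location);
    hit.normal = applyRotation(hit.normal);
  }
  return hits;
}
```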
The only file you need to change for this task is Transforms.pde. Note that you can also optionally implement the Scaling class, which adds the ability to scale objects in arbitrary dimensions (e.g. to turn a sphere into an ellipsoid).
Translation
Translation is the simpler of the two types of transformations that we are concerned with here. A translation can be performed in each of the three cardinal directions x, y and/or z. Unlike for rotations, the order in which the translations are performed does not matter; indeed, we can just take a position and add the translation to it as a vector. To perform the inverse of a translation by (x,y,z), we use the vector (-x,-y,-z).
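In code, assuming the offset is stored in a PVector field called translation (a placeholder name), this is a one-liner each way:

```
PVector translate(PVector p)        { return PVector.add(p, translation); }  // forward
PVector inverseTranslate(PVector p) { return PVector.sub(p, translation); }  // inverse
```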
Rotation
As noted above, rotations are performed in a specific order, and this order matters: rotating around an axis changes the direction in which the other (local) axes are oriented, and our rotations are always performed around the global axes. Consider a cylinder that is upright (i.e. the z-axis is the center axis of the cylinder). If we rotate this cylinder around the z-axis, nothing visible happens. If we rotate it around the y-axis, it will be tilted towards or away from the camera (assuming our default camera placement of (0,0,0), looking in direction (0,1,0)). However, if we first rotate it around the y-axis (leading to it being tilted towards the camera), and then rotate it around the (global) z-axis, it will suddenly visibly turn, and might even end up being tilted sideways. This is why having a defined order of rotations is essential to getting consistent results.
Now, how do we actually rotate a vector around an axis? The first observation we can make is that if we rotate a vector around, say, the z-axis, the z-coordinate stays the same, and only the x- and y-coordinates change. To perform this rotation, we could look at x and y as the legs of a triangle, calculate the angle they define with a coordinate axis, add the angle $\alpha$ we want to rotate by to that angle, and compute new values x' and y'. The Game Math Book defines these rotations in terms of matrices, but for our purposes, we can extract the equations for the new coordinate values directly (note that the book rotates in the opposite direction, which is why the signs are different here):
Rotation around the z-axis:
Rotation around the y-axis:
Rotation around the x-axis:
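Written out, with $\alpha$ the rotation angle, these have the following form (a sketch assuming a counterclockwise rotation when viewed from the positive end of the respective axis; flip the signs of the $\sin$ terms if your convention differs):

Around $z$: $x' = x\cos\alpha - y\sin\alpha$, $y' = x\sin\alpha + y\cos\alpha$, $z' = z$

Around $y$: $x' = x\cos\alpha + z\sin\alpha$, $y' = y$, $z' = -x\sin\alpha + z\cos\alpha$

Around $x$: $x' = x$, $y' = y\cos\alpha - z\sin\alpha$, $z' = y\sin\alpha + z\cos\alpha$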
To perform the inverse transformation, rotate by $-\alpha$ instead. I would recommend implementing individual methods that apply each of these transformations, and then calling them in the correct order.
Note: You may encounter functions like rotateX in the Processing reference. These are not useful here.
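Instead, the per-axis helpers can be written directly from the equations above. This is a sketch, again assuming the counterclockwise convention; adjust the signs if your convention differs.

```
// Rotation around the z-axis: z stays fixed, x and y change.
PVector rotZ(PVector v, float a) {
  return new PVector(v.x * cos(a) - v.y * sin(a),
                     v.x * sin(a) + v.y * cos(a),
                     v.z);
}

// Rotation around the y-axis: y stays fixed, x and z change.
PVector rotY(PVector v, float a) {
  return new PVector(v.x * cos(a) + v.z * sin(a),
                     v.y,
                     -v.x * sin(a) + v.z * cos(a));
}

// Rotation around the x-axis: x stays fixed, y and z change.
PVector rotX(PVector v, float a) {
  return new PVector(v.x,
                     v.y * cos(a) - v.z * sin(a),
                     v.y * sin(a) + v.z * cos(a));
}
```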
Scaling
In addition to the "moverotation" transformation, there is also a "scaling" transformation. This transformation contains a single attribute, "scaling" (a vector), which defines how each of the three dimensions is to be scaled by the transform. The implementation is analogous to the movement and rotation (but the actual operation is much simpler):
- Apply the inverse scaling to the ray (origin and direction)
- Intersect with the child node
- Apply the scaling to the result (impact location and normal vector)
To apply a scaling operation, multiply each component of a vector by the corresponding scaling factor. The inverse scaling operation is then a division by that same factor (note: scaling factors should never be 0, as you would then divide by 0).
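A sketch of the Scaling intersect method, under the same assumptions about Ray, RayHit and child as above (the actual field names in the template may differ):

```
ArrayList<RayHit> intersect(Ray r) {
  // Inverse scaling: divide each component by the corresponding factor.
  PVector localOrigin = new PVector(r.origin.x / scaling.x,
                                    r.origin.y / scaling.y,
                                    r.origin.z / scaling.z);
  PVector localDirection = new PVector(r.direction.x / scaling.x,
                                       r.direction.y / scaling.y,
                                       r.direction.z / scaling.z);
  ArrayList<RayHit> hits = child.intersect(new Ray(localOrigin, localDirection));
  for (RayHit hit : hits) {
    // Forward scaling: multiply each component by the corresponding factor.
    hit.location = new PVector(hit.location.x * scaling.x,
                               hit.location.y * scaling.y,
                               hit.location.z * scaling.z);
    hit.normal = new PVector(hit.normal.x * scaling.x,
                             hit.normal.y * scaling.y,
                             hit.normal.z * scaling.z);
    // Depending on how normals are used elsewhere, you may want to re-normalize hit.normal here.
  }
  return hits;
}
```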