Project: Camera Placement and Rotations

Camera Placement and Rotation (optional)

If you have been following the suggested approach, your Raytracer class first sets up a ray with an origin (initially (0,0,0)) and a direction based on the x,y-coordinates of the pixel, and then checks whether that ray intersects the scene. In this task you are going to allow placing the camera at an arbitrary position and rotating it to face an arbitrary direction. The first part should be straightforward, since you only need to change the origin of the ray (all the formulas we have take the origin into account, so it being at (0,0,0) was pure “coincidence”).
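As a minimal sketch of that first part (Python with NumPy; the names camera_position and primary_ray are hypothetical and not part of any provided code), supporting an arbitrary camera position really only means using that position as the ray origin:

```python
import numpy as np

def primary_ray(camera_position, pixel_direction):
    """Hypothetical helper: a primary ray is just an origin plus a direction.

    With the fixed camera the origin used to be (0, 0, 0); an arbitrary camera
    position simply becomes the new origin. The intersection formulas already
    work with any origin.
    """
    origin = np.asarray(camera_position, dtype=float)   # previously always (0, 0, 0)
    direction = np.asarray(pixel_direction, dtype=float)
    direction /= np.linalg.norm(direction)              # keep the direction normalized
    return origin, direction
```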

Viewing direction

In order to change the viewing direction, consider how you determined which rays to shoot so far: you moved the same distance “forward” and to the “left” (for the left-most pixel) to get the 45 degree viewing angle (see below for other fov values). “Forward” was (0,1,0), since we said that the camera was looking in that direction, and “left” was (-1,0,0), i.e. along the negative x-axis (because we wanted to keep the z-axis (0,0,1) as “up”).

We can use the same idea for an arbitrary viewing direction: to get the left-most pixel we move the same number of units “forward” as we move “left” (seen from the perspective of the camera); we just need to determine these local axes. “Forward” is easy, since it is the given view direction (in our json file/Scene object). For “left”, however, we have infinitely many choices (we know which way is forward, but the camera may be rotated arbitrarily around that axis). The “trick” we are going to use is to “anchor” the camera rotation using the global z-axis: “left” is then the direction that is orthogonal to the plane spanned by the camera’s forward direction and the global “up” direction. How can we get a vector that is orthogonal to two others? We use the cross product! Note that the order is important: use the z-axis as the first operand and the view direction as the second one to get a vector that actually points “left”.

Finally, we still need an “up” direction. If the camera is looking down at an angle, the global z-axis will not lead to correct results (it was useful to anchor the rotation, but not for more than that). What we want is a vector that is orthogonal to the forward/left plane of the camera, which we can again get using the cross product (forward crossed with left, in that order).
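A minimal sketch of this construction (Python with NumPy; the function name camera_basis is an assumption, but the cross-product order matches the description above):

```python
import numpy as np

def camera_basis(view_direction, world_up=(0.0, 0.0, 1.0)):
    """Build the camera's local axes from its viewing direction.

    The rotation is anchored to the global z-axis: 'left' is orthogonal to the
    plane spanned by world_up and forward, and 'up' is orthogonal to the
    camera's forward/left plane. The operand order of the cross products matters.
    """
    forward = np.asarray(view_direction, dtype=float)
    forward /= np.linalg.norm(forward)

    left = np.cross(world_up, forward)   # global z-axis first, view direction second
    left /= np.linalg.norm(left)

    up = np.cross(forward, left)         # forward first, left second
    up /= np.linalg.norm(up)

    return forward, left, up
```

For the default camera, camera_basis((0, 1, 0)) returns forward (0,1,0), left (-1,0,0) and up (0,0,1), matching the axes used so far.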

Make sure to normalize the forward (which should already be normalized), left and up vectors and you are good to go: to determine the ray direction, take the left and up offsets you used before, multiply them with the “left” and “up” vectors, and add the results to the forward vector.
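Putting it together, the per-pixel ray direction uses the same offsets as before, just expressed in the camera’s local axes (a sketch; left_offset and up_offset stand for whatever per-pixel factors you already compute):

```python
import numpy as np

def ray_direction(forward, left, up, left_offset, up_offset):
    """Sketch: combine the camera's local axes with the per-pixel offsets.

    With the old fixed camera this direction was simply
    (-left_offset, 1, up_offset); now the same offsets scale the camera's own
    'left' and 'up' vectors instead of the world axes.
    """
    direction = (np.asarray(forward, dtype=float)
                 + left_offset * np.asarray(left, dtype=float)
                 + up_offset * np.asarray(up, dtype=float))
    return direction / np.linalg.norm(direction)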

Field of view

So far, our field of view was always 90 degrees, or 45 degrees in each direction. The effect this had was that for every unit we moved forward, we also moved one unit to the left (for the rays corresponding to the left-most pixel). However, if we want a narrower field of view, we effectively need to move fewer units left than we move forward, and if we want a wider field of view, we need to move more units to the left than forward. It so happens that the ratio of how many units we need to move left for every unit we move forward is precisely the tangent of (half) the field-of-view angle. The tangent of 45 degrees is 1, i.e. we move one unit left for every unit forward. The tangent of 35 degrees (half of a 70 degree field of view) is about 0.7, i.e. for every unit we move forward, we would only move 0.7 units left. And if we want a wider field of view, say, 120 degrees, we would calculate the tangent of 60 degrees (half the angle), which is about 1.7, so we would move 1.7 units left for every unit we move forward.
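The arithmetic from this paragraph, as a tiny sketch (plain Python, no assumptions beyond the angles used as examples above):

```python
import math

# Scale factor = tan(fov / 2): how many units we move "left" per unit "forward".
for fov_degrees in (90.0, 70.0, 120.0):
    scale = math.tan(math.radians(fov_degrees / 2.0))
    print(f"fov {fov_degrees:5.1f} deg  ->  tan(fov/2) = {scale:.2f}")

# fov  90.0 deg  ->  tan(fov/2) = 1.00
# fov  70.0 deg  ->  tan(fov/2) = 0.70
# fov 120.0 deg  ->  tan(fov/2) = 1.73
```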

To incorporate this into your raytracer, you really only need to multiply the factor you used for the “left” and “up” vectors in the previous step by the tangent of half of the given angle.
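As a sketch, this is a one-line change to the direction computation from above (fov_degrees is assumed to come from your scene description; the function name is hypothetical):

```python
import math
import numpy as np

def ray_direction_with_fov(forward, left, up, left_offset, up_offset, fov_degrees):
    """Sketch: scale the per-pixel offsets by tan(fov / 2).

    For fov = 90 degrees the scale is 1 and nothing changes; a narrower field
    of view shrinks the offsets, a wider one enlarges them.
    """
    scale = math.tan(math.radians(fov_degrees) / 2.0)
    direction = (np.asarray(forward, dtype=float)
                 + scale * left_offset * np.asarray(left, dtype=float)
                 + scale * up_offset * np.asarray(up, dtype=float))
    return direction / np.linalg.norm(direction)
```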