Project: Framework

The framework consists of several files:

  • raytracer.pde: This is the “main” file, and also where the Raytracer class lives.
  • Primitives.pde: Contains the class definitions for the primitives (Spheres, Cylinders, Planes, etc.)
  • CSG.pde: Contains the class definitions for the CSG operations (Union, Intersection, and Difference)
  • Transforms.pde: Contains the class definition for the transformation operations
  • Material.pde: Contains classes for surface materials and their properties
  • Lighting.pde: Contains a basic lighting model, as well as the class definition for the Phong Lighting model
  • Scene.pde: Contains base class of all scene objects, and some auxiliary classes
  • SceneLoader.pde: Contains the code to load a scene from a json file
  • util.pde: Contains some useful utility functions

You can find some more details about every file and its intended functionality below.

raytracer.pde

This file is responsible for performing the actual raytracing operation. It also contains the setup and draw methods required by Processing. At the very top of the file you will find two variables that you will want to change repeatedly: input determines the json file that is read by the raytracer, and output allows you to save the raytraced result to an image file (it will also be shown on the screen). Note that for advanced usage, e.g. to create animation sequences, you can set the repeat variable to a value greater than 0. If you do this, the input and output strings should contain integer format specifiers, e.g. %03d, and the raytracer will create a series of images, one for each input file. iteration allows you to control which index to start with. This is also useful to run all test cases in a directory, e.g.

String input =  "data/tests/milestone1/test%d.json";
String output = "data/tests/milestone1/test%d.png";

int repeat = 14;

int iteration = 1;

This will run test1.json, test2.json, …, test14.json from the data/tests/milestone1 directory and save the result of each in a png file of the same name.

The setup and draw methods handle loading these files and calling the raytracer for each pixel (and in a loop, if repeat is set); you do not have to change them.

The second part that you will touch is at the bottom of the file: The Raytracer class. getColor(int x, int y) will be called for each pixel, and will be where you should set up the Ray, intersect it with the scene, and determine which color to use by incorporating lighting information, reflection and refraction. See the description of milestone 1 for more details.
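
As a sketch of what the ray setup in getColor could look like, the mapping from a pixel to a ray direction might be computed as follows. Plain arrays stand in for PVector, and the 90-degree horizontal field of view, the camera at the origin, and +z as the "forward" direction are assumptions for this illustration, not the framework's fixed conventions.

```java
// Hypothetical per-pixel ray direction for getColor(x, y); names and
// conventions (90-degree fov, +z forward, camera at origin) are assumptions.
public class RayDemo {
    public static double[] pixelToDirection(int x, int y, int width, int height) {
        // With fov = 90 degrees, the image plane at distance 1 spans [-1, 1] in u.
        double u = 2.0 * (x + 0.5) / width - 1.0;
        // Flip y so v points up, and correct for the aspect ratio.
        double v = (1.0 - 2.0 * (y + 0.5) / height) * ((double) height / width);
        double[] d = { u, v, 1.0 };                   // "forward" assumed to be +z
        double len = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        for (int i = 0; i < 3; i++) d[i] /= len;      // normalize, so t is a distance
        return d;
    }
}
```

Normalizing the direction here matters later: it is what makes the t-value of a RayHit equal the distance from the camera.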

Primitives.pde

This file contains one class for each primitive object, which includes spheres, planes, triangles, etc. These primitives are the basic geometric building blocks that comprise our scene, and they all implement the SceneObject interface (which itself lives in Scene.pde). The majority of the work for milestone 1 will be to implement the intersect methods for the supported primitives. Note that by default all intersect methods return an empty result, i.e. if you include a sphere in your scene without implementing intersect in the Sphere-class, you will not see anything. The description of milestone 1 talks more about the different intersect-methods you have to implement.
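
For the sphere, the intersection boils down to solving a quadratic in t. A minimal sketch, assuming a normalized ray direction and using plain arrays in place of PVector (the framework's Sphere class and RayHit result are richer than this):

```java
// Hypothetical ray-sphere intersection: returns the sorted t-values where the
// ray o + t*d hits a sphere with center c and radius r, or an empty array.
public class SphereIntersect {
    public static double[] intersect(double[] o, double[] d, double[] c, double r) {
        // Solve |o + t*d - c|^2 = r^2, a quadratic in t (d assumed normalized, so a == 1).
        double[] oc = { o[0]-c[0], o[1]-c[1], o[2]-c[2] };
        double b  = 2.0 * (oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
        double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        double disc = b*b - 4.0*cc;
        if (disc < 0) return new double[0];            // ray misses the sphere
        double s = Math.sqrt(disc);
        return new double[] { (-b - s) / 2.0, (-b + s) / 2.0 };  // entry, exit
    }
}
```

The smaller root is the "entry" hit and the larger the "exit", which is exactly the distinction the RayHit class records.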

CSG.pde

This file contains the constructive solid geometry operations that you will implement in milestone 2: Union, Intersection, and Difference. Each of these operations is a class that implements SceneObject, i.e. for each of them you will have to implement the intersect-method. In contrast to the primitives, for these operations you will call the intersect-methods of their children and combine the results appropriately. Note that the file already contains some code for Union, but it is not a true union: all it does is collect the results of the intersect-calls on the children into one list. This means that if you have, e.g., a union of two overlapping spheres, a ray that passes through both spheres would result in a list of RayHits representing [enter sphere 1, enter sphere 2, exit sphere 1, exit sphere 2], instead of the correct [enter sphere 1, exit sphere 2]. In other words, “entry” and “exit” ray-hits have to alternate in a list; if an object is entered by a ray, the only thing that can happen next is that the ray exits the object again. As described on this page, if you do this correctly you can also make use of this fact in the implementation of the different operations.
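
The alternation invariant suggests a simple counting scheme for a true union: merge the children's hits by distance and count how many children the ray is currently inside. A sketch, with a stripped-down (t, entry) pair standing in for the framework's RayHit:

```java
import java.util.*;

// Hypothetical union of two sorted, alternating per-child hit lists.
public class CsgUnion {
    public static class Hit {
        public final double t; public final boolean entry;
        public Hit(double t, boolean entry) { this.t = t; this.entry = entry; }
    }
    public static List<Hit> union(List<Hit> a, List<Hit> b) {
        List<Hit> all = new ArrayList<>(a);
        all.addAll(b);
        all.sort(Comparator.comparingDouble(h -> h.t));   // merge by distance
        List<Hit> out = new ArrayList<>();
        int inside = 0;   // how many children the ray is currently inside
        for (Hit h : all) {
            if (h.entry) {
                if (inside == 0) out.add(h);   // 0 -> 1: entering the union
                inside++;
            } else {
                inside--;
                if (inside == 0) out.add(h);   // 1 -> 0: leaving the union
            }
        }
        return out;
    }
}
```

Intersection and Difference follow the same merge-and-count shape, just with different conditions for when a merged hit is emitted.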

Transforms.pde

This file contains the two classes corresponding to our transform operations: MoveRotation and Scaling. Each of these operations has one child, which is a SceneObject, and should perform the following sequence of actions:

  • Inversely transform the ray (i.e. bring it into the coordinate system of the child)
  • Call intersect with this transformed ray
  • Transform the RayHits (i.e. bring them into the global coordinate system)
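
The three steps above can be sketched for a uniform Scaling node with factor s. The names and the functional-interface stand-in for the child's intersect call are illustrative, not the framework's API; note that for uniform scaling the normalized direction is unchanged, so only the origin and the resulting t-values need adjusting.

```java
// Hypothetical intersect() for a uniform Scaling node, following the three steps.
public class ScaleTransform {
    // childIntersect stands in for calling the child's intersect-method;
    // it returns hit distances t along the (transformed) ray.
    public static double[] intersect(double[] origin, double[] dir, double s,
            java.util.function.BiFunction<double[], double[], double[]> childIntersect) {
        // 1. Inversely transform the ray: scale the origin down; the normalized
        //    direction is unchanged under uniform scaling.
        double[] localOrigin = { origin[0]/s, origin[1]/s, origin[2]/s };
        // 2. Call intersect with this transformed ray.
        double[] localTs = childIntersect.apply(localOrigin, dir);
        // 3. Transform the hits back: local distances scale up by s.
        double[] globalTs = new double[localTs.length];
        for (int i = 0; i < localTs.length; i++) globalTs[i] = localTs[i] * s;
        return globalTs;
    }
}
```

For non-uniform scaling (and for the hit normals) more care is needed: directions must be re-normalized and normals transformed with the inverse-transpose.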

Material.pde

This file contains the classes to represent different materials. There are three different material types: Basic (color), textured, and procedural. All three materials share a collection of basic properties (related to lighting, reflection and refraction), which are collected in the MaterialProperties class (since this collection is passed through many different functions and constructors, it made sense to collect them in a separate class rather than to put them in the Material base class). Each Material has a method getColor(float u, float v) that should return the color corresponding to the texture coordinates (u,v). The basic (and base) Material class adds a single color to the other material properties, and will always return this color for any combination of u and v. The TexturedMaterial-class, on the other hand, loads an image from a file, and returns pixels from this image depending on the u and v coordinates (u and v range from 0 to 1 and correspond to the x- and y-dimensions of the image, respectively). In Milestone 4, you will use this to texture your primitives; the main challenge will be to determine the u- and v-coordinates that correspond to a ray impact point on the 3D object.
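
The index arithmetic behind the texture lookup can be sketched as follows; the real TexturedMaterial works on a Processing PImage and returns a color, so the signature here is an illustration of the mapping only:

```java
// Hypothetical (u, v) -> pixel mapping for a w-by-h image stored as a flat
// pixels[] array, as in Processing. u and v are assumed to lie in [0, 1].
public class UvLookup {
    public static int pixelIndex(float u, float v, int w, int h) {
        int x = Math.min((int) (u * w), w - 1);   // clamp u == 1.0 to the last column
        int y = Math.min((int) (v * h), h - 1);   // clamp v == 1.0 to the last row
        return y * w + x;                         // index into the flat pixels[] array
    }
}
```

The clamping at the edges avoids an out-of-bounds access for the boundary case u == 1.0 or v == 1.0.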

Finally, the third Material type works slightly differently: Since getColor is a function, it can execute arbitrary Java-code, and the ProceduralMaterials make use of this fact to return color values that are calculated on the fly. Once you have determined u- and v-values for your primitives, and they work with textured materials, they will also work with the procedural materials without any further work. However, there are only two example procedural materials given, and as an optional task you can add your own. The way ProceduralMaterials are implemented is that each material consists of a subclass of ProceduralMaterial, which contains the getColor-code, and a subclass of ProceduralMaterialBuilder. The builder is registered with an arbitrary string name in a ProceduralMaterialRegistry, and has a method make that instantiates the actual material. Take SineWaveMaterial, for example: the getColor method returns a red color that is scaled by a value derived from the u and v coordinates as well as the current execution time (obtained with millis()), in a sine wave pattern. The SineWaveMaterialBuilder is registered with the name "SineWave" by calling super("SineWave") in its constructor. Note: An actual instance of SineWaveMaterialBuilder needs to be constructed by storing it in a variable. This variable is never actually used, but by constructing the builder it is added to the registry.

What is the result of all of this? If a json file contains a material with "type": "procedural", the "name" property refers to the name the material was registered under, e.g. "SineWave", and the scene loader will find the appropriate material builder in the registry and use it to create an instance of the procedural material and attach it to the SceneObject. To add a new material, you only need to implement these two classes (material and builder), and instantiate the builder. In perhaps more familiar terms, the ProceduralMaterialRegistry acts as a factory for procedural materials, but rather than enumerating all possibilities in a giant switch-statement (as you may have seen in the factory pattern in the past), new procedural materials can be added dynamically.
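
The registry idea can be sketched in a few lines; all names below are illustrative stand-ins for ProceduralMaterial, ProceduralMaterialBuilder, and ProceduralMaterialRegistry, not the framework's actual classes:

```java
import java.util.*;

// Hypothetical sketch of the dynamic-factory pattern described above.
public class RegistryDemo {
    interface Material { String name(); }       // stands in for ProceduralMaterial
    interface Builder  { Material make(); }     // stands in for the builder class

    static final Map<String, Builder> registry = new HashMap<>();

    // A builder self-registers under a string name, mirroring the effect of
    // calling super("SineWave") in SineWaveMaterialBuilder's constructor.
    static void register(String name, Builder b) { registry.put(name, b); }

    // What the scene loader does when it sees "type": "procedural":
    // look up the builder by its registered "name" and build the material.
    static Material load(String name) { return registry.get(name).make(); }
}
```

The point of the pattern is the last line: the loader never enumerates material types, so adding a new procedural material requires no changes to the loader at all.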

Final note: Procedural materials can do very cool things, like change their appearance depending on execution time (as both of the examples do). Normally, our raytracer does not render the image more than once (noLoop is called in setup), but with procedural materials it makes sense to render continuously. The scene loader automatically re-enables looping when it encounters any procedural material, but you can turn off this behavior by setting doAutoloop to false in raytracer.pde for better performance and/or debugging.

Lighting.pde

There are two lighting models defined in this file, the basic model (the default) and the Phong lighting model. Of course, to model lighting, we also need actual lights, which are defined in the Light class in this file. Lights have a position and two properties: diffuse, which you can think of as the “color” of the light, and specular, which defines how the light affects specular highlights (often these two will be the same). A lighting model takes a list of lights, and allows you to determine how they affect a particular point in the scene that is hit by a ray using the getColor-method. The basic lighting model is implemented to only take into account the first light in the list, and only varies the intensity of the surface color according to the angle with which the light shines onto the surface. You will implement the Phong lighting model as part of milestone 3.
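
The intensity variation the basic model performs amounts to the Lambertian cosine factor; a sketch with plain arrays standing in for PVector:

```java
// Hypothetical diffuse factor: the cosine of the angle between the surface
// normal and the direction to the light, clamped at zero.
public class DiffuseDemo {
    public static double diffuseFactor(double[] normal, double[] toLight) {
        // Both vectors are assumed normalized, so the dot product is cos(angle).
        double dot = normal[0]*toLight[0] + normal[1]*toLight[1] + normal[2]*toLight[2];
        return Math.max(0.0, dot);   // a light behind the surface contributes nothing
    }
}
```

The Phong model adds a specular term on top of this (and sums over all lights), which is where the Light class's spec method comes in.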

The Light class provides you with two methods: shine and spec. Each of these uses the respective light property to scale a surface color value. The way you use this is: first you determine the color value of the pixel (using the Material’s getColor-method) at the u/v-coordinates of the ray impact point, then you pass it to shine (or spec), and then you use the resulting color in the lighting model as appropriate. The basic lighting model does this with color surfacecol = lights.get(0).shine(hit.material.getColor(hit.u, hit.v)); (it does not do specular highlights, and thus never calls spec).

Scene.pde

This file defines two important classes and an interface. You should not have to modify this file at all, but you will be working with objects of these types throughout the project.

RayHit

A RayHit object represents a point at which one of your rays intersects one of your objects. Each hit can either be an “entry”, i.e. the ray entering the surface from outside, or an “exit”, when the ray leaves the object again.

  • float t: Recall the ray equation: $r(t) = o + t\cdot \vec{d}$. This means that a ray is the set of all points on a line that starts at an origin $o$ and goes in direction $\vec{d}$. Each point on this ray is associated with a parameter value $t$, which is the distance that point has from the origin (if $\vec{d}$ is normalized, which it should always be; make sure it is).
  • PVector location: The ray hit occurs at a certain location in space; from the origin and direction of the ray, and the value of $t$, this can be calculated, but for efficiency reasons it makes sense to store it explicitly rather than recompute it every time it is needed.
  • PVector normal: This is the normal vector of the surface that is hit by the ray.
  • boolean entry: This is true if the RayHit represents the ray entering the object, otherwise it is false.
  • Material material: This is a reference to the material of the object that was hit.
  • float u,v: These are the texture coordinates of the point on the surface that was hit. You will compute these values in milestone 4.
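
The relationship between t and location is simply the ray equation evaluated at t; as a sketch (plain arrays standing in for PVector):

```java
// Hypothetical helper: the point a RayHit's location field stores is
// r(t) = o + t*d, evaluated component-wise.
public class RayPoint {
    public static double[] at(double[] o, double[] d, double t) {
        return new double[] { o[0] + t*d[0], o[1] + t*d[1], o[2] + t*d[2] };
    }
}
```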

Scene

The Scene represents the information loaded from the json file. It consists of the objects that actually make up the scene, as well as properties that define how it should be rendered.

  • LightingModel lighting: Contains an instance of one of the two lighting models, basic or phong. Call getColor to determine how a particular pixel should be shown.
  • SceneObject root: Contains the actual scene geometry. Note that while this is a “single” object, it may be, e.g. a Union (see above) object that represents a collection of child objects. In other words, the scene is a tree, and this is the root of that tree.
  • int reflections: Defines how many levels of reflection should be performed. You will implement reflections in milestone 3.
  • color background: Defines the background color, i.e. what you should use if a ray does not hit any object. This is the default return value for the Raytracer’s getColor method.
  • PVector camera: Defines where the camera is located in the scene. Initially, you will assume that this is always (0,0,0), but changing this to allow moving the camera is straightforward: You only need to choose a different origin for your ray.
  • PVector view: This defines the “forward” direction of the camera, i.e. it allows you to rotate the view. This is a bit (but not much) trickier than moving the camera, and is therefore available as a bonus task.
  • float fov: The field of view of the camera. By default, you can assume this is 90 degrees, but as part of the camera bonus task you can also support other values.

SceneObject

Finally, each scene, as mentioned, contains objects which are organized in a hierarchy. You don’t actually need to know the hierarchy (and hence SceneObject does not even give you access to the children of a node), but you need to be able to intersect the scene with a Ray, and this is the only thing this interface allows you to do. Note that there is a list of primitives, another list of aggregation operations, and a third list of transformations you can perform on objects, but you can add other node types as you see fit (be sure to talk to me beforehand to discuss possible bonus points). Maybe you want to load 3D models from a .obj-file; this is “just” tedious, but basically results in a bunch of triangles, which you already support; or maybe you want to take a shot at the torus intersection.

SceneLoader.pde

This file contains (very tedious and repetitive) code to load a Scene from a given json file. The main function is loadScene, which then loads the individual pieces from the json file using a list of makeX-functions, where X can be any of LightingModel, Light, SceneObject, Material, Color, Vector, and so on. You should not have to modify this file at all, unless you plan on adding additional features to the json file (e.g. a Torus).

util.pde

This is a file you might want to take a look at, as the functions defined therein may be useful in various contexts. They are all very short and simple functions:

  • int r(color c), int g(color c), and int b(color c): Gives you access to the red, green and blue components of a color. Note that Processing provides functions red, green and blue that do the same, but Processing itself says that the bitshifting version used here is faster.
  • int clamp(int x, int low, int high): Ensures that an integer value falls within the given range, i.e. this function returns x if it is between low and high; otherwise it will return low, if x is less than low, and high otherwise. low must be less than high or you may get strange results.
  • color scaleColor(color c, color scale): Allows you to “scale” a color, by using another color as the scaling factor. The RGB values of scale are divided by 255.0 and then multiplied with the corresponding RGB values of c.
  • color addColors(color x, color y): Allows you to add two colors together, ensuring that the result stays in the valid range, i.e. RGB values are clamped to the range from 0 to 255.
  • color multColor(color c, float a): Multiplies each component of a color with a floating point value and ensures the resulting value is valid/clamped to the range from 0 to 255.
  • float sgn(float x): The “sign” of x; returns -1 if x is negative, 1 if it is positive, and 0 if it is zero.
  • float EPS: A small floating point value you can use as an offset for e.g. shadow rays.
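
As an illustration of the kind of helpers involved, clamp and a single-channel version of scaleColor might look like this; ints stand in for Processing's packed color type, so these are sketches, not the framework's exact signatures:

```java
// Hypothetical sketches of two of the utilities described above.
public class UtilDemo {
    // Clamp x into [low, high]; assumes low <= high.
    public static int clamp(int x, int low, int high) {
        return Math.max(low, Math.min(high, x));
    }
    // Scale one RGB channel of c by the corresponding channel of scale,
    // where the scale channel is interpreted as a factor in [0, 1] (scale/255.0).
    public static int scaleChannel(int c, int scale) {
        return clamp((int) (c * (scale / 255.0)), 0, 255);
    }
}
```

A scale channel of 255 leaves the color channel unchanged, while 128 roughly halves it, which is why scaleColor behaves like a per-channel multiplication by a color-valued factor.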