Vectors

This blog post will cover some of the most common operations that can be performed on a vector. Throughout, all vectors are treated as vectors in 3D-space, but the operations are for the most part the same in all ND-spaces.

Addition
When you’re adding two vectors together, you’re performing a component-wise addition on the two vectors. If you’re adding the vector A (1.0, 2.0, 3.0) to the vector B (4.0, 5.0, 6.0), you’d get a vector C (5.0, 7.0, 9.0). This addition may be written as C = A + B.

float AX = 1.0;
float AY = 2.0;
float AZ = 3.0;

float BX = 4.0;
float BY = 5.0;
float BZ = 6.0;

float CX = AX + BX;//5.0
float CY = AY + BY;//7.0
float CZ = AZ + BZ;//9.0

Subtraction
When you’re subtracting two vectors, you’re performing a component-wise subtraction on the two vectors. If you’re subtracting the vector B (4.0, 5.0, 6.0) from the vector A (5.0, 7.0, 9.0), you’d get a vector C (1.0, 2.0, 3.0). This subtraction may be written as C = A - B.

float AX = 5.0;
float AY = 7.0;
float AZ = 9.0;

float BX = 4.0;
float BY = 5.0;
float BZ = 6.0;

float CX = AX - BX;//1.0
float CY = AY - BY;//2.0
float CZ = AZ - BZ;//3.0

Multiplication
When you’re multiplying two vectors together, you’re performing a component-wise multiplication on the two vectors. If you’re multiplying the vector A (2.0, 2.0, 2.0) with the vector B (3.0, 3.0, 3.0), you’d get a vector C (6.0, 6.0, 6.0). This multiplication may be written as C = A * B. Note that this component-wise product is not the same as the dot product or cross product described further down. Multiplication is also possible between a vector and a scalar value (a single number), in which case each component is multiplied by that scalar; a sketch of that case follows the example below.

float AX = 2.0;
float AY = 2.0;
float AZ = 2.0;

float BX = 3.0;
float BY = 3.0;
float BZ = 3.0;

float CX = AX * BX;//6.0
float CY = AY * BY;//6.0
float CZ = AZ * BZ;//6.0
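
For the scalar case, a quick sketch could look like the following, where each component of A (2.0, 2.0, 2.0) is multiplied by the scalar 3.0:

float AX = 2.0;
float AY = 2.0;
float AZ = 2.0;

float S = 3.0;

float CX = AX * S;//6.0
float CY = AY * S;//6.0
float CZ = AZ * S;//6.0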

Division
When you’re dividing two vectors, you’re performing a component-wise division on the two vectors. If you’re dividing the vector A (6.0, 6.0, 6.0) by the vector B (3.0, 3.0, 3.0), you’d get a vector C (2.0, 2.0, 2.0). This division may be written as C = A / B. Division is also possible between a vector and a scalar value, in which case each component is divided by that scalar; a sketch of that case follows the example below.

float AX = 6.0;
float AY = 6.0;
float AZ = 6.0;

float BX = 3.0;
float BY = 3.0;
float BZ = 3.0;

float CX = AX / BX;//2.0
float CY = AY / BY;//2.0
float CZ = AZ / BZ;//2.0
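
For the scalar case, a quick sketch could look like the following, where each component of A (6.0, 6.0, 6.0) is divided by the scalar 3.0:

float AX = 6.0;
float AY = 6.0;
float AZ = 6.0;

float S = 3.0;

float CX = AX / S;//2.0
float CY = AY / S;//2.0
float CZ = AZ / S;//2.0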

Length
The length of a vector, which is a scalar value and not a vector, is the square root of X * X + Y * Y + Z * Z. Sometimes the length squared is required, in which case the square root is omitted.

float AX = 1.0;
float AY = 1.0;
float AZ = 1.0;

float ALength = Sqrt(AX * AX + AY * AY + AZ * AZ);
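
If only the length squared is needed, the square root is simply left out. Using the same components as above:

float ALengthSquared = AX * AX + AY * AY + AZ * AZ;//3.0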

Dot Product
The dot product between the two vectors A and B is A.X * B.X + A.Y * B.Y + A.Z * B.Z. The dot product is a scalar value, not a vector.

float AX = 1.0;
float AY = 1.0;
float AZ = 1.0;

float BX = 1.0;
float BY = 1.0;
float BZ = 1.0;

float DotProduct = AX * BX + AY * BY + AZ * BZ;//3.0

Normalizing a Vector
To normalize a vector, divide the vector by its length. The result is a vector with the same direction but a length of 1.0. Note that this only works if the length is not zero.

float AX = 1.0;
float AY = 1.0;
float AZ = 1.0;

float ALength = Sqrt(AX * AX + AY * AY + AZ * AZ);

float BX = AX / ALength;
float BY = AY / ALength;
float BZ = AZ / ALength;

Cross Product
The cross product between the two vectors A and B is a vector C. The vector C is defined by C.X = (A.Y * B.Z - A.Z * B.Y), C.Y = (A.Z * B.X - A.X * B.Z) and C.Z = (A.X * B.Y - A.Y * B.X).

float AX = 1.0;
float AY = 1.0;
float AZ = 1.0;

float BX = 1.0;
float BY = 1.0;
float BZ = 1.0;

float CX = AY * BZ - AZ * BY;//0.0
float CY = AZ * BX - AX * BZ;//0.0
float CZ = AX * BY - AY * BX;//0.0
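
In the example above, A and B are parallel, so the result happens to be the zero vector. A more illustrative sketch uses the X-axis and the Y-axis, whose cross product is the Z-axis:

float AX = 1.0;
float AY = 0.0;
float AZ = 0.0;

float BX = 0.0;
float BY = 1.0;
float BZ = 0.0;

float CX = AY * BZ - AZ * BY;//0.0
float CY = AZ * BX - AX * BZ;//0.0
float CZ = AX * BY - AY * BX;//1.0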

Optimizations
When you write the code yourself, you may want to use a few optimizations. One of them concerns division, because division tends to be slow. If you’re dividing by the same value more than once, you can calculate its reciprocal (inverse) once and then multiply by that reciprocal instead.

float AX = 1.0;
float AY = 1.0;
float AZ = 1.0;

float ALength = Sqrt(AX * AX + AY * AY + AZ * AZ);
float ALengthReciprocal = 1.0 / ALength;//The reciprocal is always 1.0 / X

float BX = AX * ALengthReciprocal;//Instead of AX / ALength
float BY = AY * ALengthReciprocal;//Instead of AY / ALength
float BZ = AZ * ALengthReciprocal;//Instead of AZ / ALength

Points vs. Vectors

This article assumes you’re somewhat familiar with ND-spaces (where N stands for an arbitrary number of dimensions). The two ND-spaces we’ll talk about are 2D-space and 3D-space.

When you’re dealing with 3D graphics, the most common concepts you’ll come across are points and vectors. They’re used pretty much everywhere. But what are they, and what differences are there between the two?

As you might know, both points and vectors are represented by their coordinates, such as (X, Y) in 2D-space or (X, Y, Z) in 3D-space. They do look very similar, but they are distinct concepts.

A point represents a location in ND-space. It has no extent (no length, area or volume), so it’s infinitesimal.

A vector represents a direction and a length in ND-space. How can it represent a direction? Doesn’t that require two points in ND-space? Yes, it does. One of the two points is implicit, whereas the other, the one you specify yourself, is explicit. The implicit point is the origin of that ND-space. In 2D-space the origin is (0.0, 0.0) and in 3D-space the origin is (0.0, 0.0, 0.0). If you specify a vector in 3D-space as (1.0, 2.0, 3.0), you’re specifying a vector that starts at the point (0.0, 0.0, 0.0) in 3D-space and is directed towards the point (1.0, 2.0, 3.0) in 3D-space.
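
As a small sketch of that idea, the vector (1.0, 2.0, 3.0) can be seen as the difference between the explicit point (1.0, 2.0, 3.0) and the implicit origin (0.0, 0.0, 0.0):

float OriginX = 0.0;
float OriginY = 0.0;
float OriginZ = 0.0;

float PointX = 1.0;
float PointY = 2.0;
float PointZ = 3.0;

float VectorX = PointX - OriginX;//1.0
float VectorY = PointY - OriginY;//2.0
float VectorZ = PointZ - OriginZ;//3.0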

What is Path Tracing?

Path tracing is a Monte Carlo ray tracing algorithm that finds a numerical solution to the integral of the rendering equation. If done correctly, the result is an image that can be virtually indistinguishable from a photograph. Both the rendering equation and path tracing were presented by James Kajiya in 1986.

The rendering equation is an integral equation grounded in physics. It expresses the law of conservation of energy for radiance: the light emitted, reflected, transmitted or received by a surface.
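
For reference, the rendering equation is commonly written in a form along these lines (in LaTeX notation; the exact notation varies between texts):

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

Here L_o is the outgoing radiance from the point x in the direction \omega_o, L_e is the emitted radiance, f_r is the BRDF of the surface, L_i is the incoming radiance from the direction \omega_i, n is the surface normal and the integral is taken over the hemisphere \Omega above the surface.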

By using path tracing, the image gets a lot of effects for free. Some of them are global illumination, reflections, refractions, caustics and color bleeding. Global illumination is the combination of direct and indirect illumination. Direct illumination is when light hits a given surface directly, whereas indirect illumination is when light bounces around the scene and finally hits a given surface. Reflections occur when light hits a reflective surface, such as a mirror. Refractions occur when light hits a refractive surface, such as glass, and enters it. Caustics occur when light is concentrated, typically because it has been reflected or refracted by a curved surface. Color bleeding occurs when the color of a surface, or albedo as it is commonly referred to, can be seen on another surface. If one surface is green and another white, you may be able to see a green tint on the white surface.

Normally, when using ray tracing, you’d add lights to your scene to see anything. This is not necessary in path tracing, because any object in the scene can act as an emitter of light. If you do add explicit lights, they cannot be point lights, because point lights have no area; the probability of a random ray hitting one is zero.

A potentially disturbing side-effect of path tracing is noise. The more samples you use, the less noise there will be in the final image, but this takes time. There are many ways to reduce noise, so that fewer samples are required for approximately the same end result. One way is to use larger light-emitting objects, such as a sky, since the probability of a ray hitting a larger object is higher than that of hitting a smaller one.

Just as with ray tracing, there are many variations of path tracing. One is bidirectional path tracing.

This blog post will not cover the exact details of an implementation, but the pseudo code below should give you the overall picture.

void render() {
    Camera camera = createCamera();
    
    World world = createWorld();
    
    Display display = createDisplay();
    
    Sampler sampler = createSampler();
    
    //Iterate over all pixels in the image:
    for(int y = 0; y < display.getHeight(); y++) {
        for(int x = 0; x < display.getWidth(); x++) {
            Color color = Color.BLACK;
            
            //Accumulate multiple samples for this pixel:
            for(int sample = 0; sample < SAMPLE_COUNT; sample++) {
                //Turn a uniform sample into a tent-filtered offset within the pixel:
                Sample2D sample2D = Sample2D.toExactInverseTentFilter(sampler.sample2D());
                
                Ray ray = camera.newRay(sample2D.u + x, sample2D.v + y);
                
                color = color.add(integrate(ray, world));
            }
            
            //Average the accumulated color over all samples:
            color = color.divide(SAMPLE_COUNT);
            
            display.update(x, y, color);
        }
    }
}

Color integrate(Ray ray, World world) {
    int currentDepth = 0;
    
    Ray currentRay = ray;
    
    Color color = Color.BLACK;
    Color radiance = Color.WHITE;
    
    //Follow the ray through the scene for up to MAXIMUM_DEPTH bounces:
    while(currentDepth++ < MAXIMUM_DEPTH) {
        Intersection intersection = world.intersection(currentRay);
        
        if(intersection.isIntersecting()) {
            Primitive primitive = intersection.getPrimitive();
            
            Material material = primitive.getMaterial();
            
            Texture textureEmission = primitive.getTextureEmission();
            
            Color colorEmission = textureEmission.getColorAt(intersection);
            
            //Add the light emitted at this intersection, weighted by the path throughput (radiance):
            color = color.add(radiance.multiply(colorEmission));
            
            //Let the material update the path throughput and compute the next ray to follow:
            Result result = material.evaluate(radiance, currentRay);
            
            radiance = result.getRadiance();
            
            currentRay = result.getRay();
        } else {
            break;
        }
    }
    
    //If the final ray escaped the scene, add the radiance from the background (for example a sky):
    Intersection intersection = world.intersection(currentRay);
    
    if(!intersection.isIntersecting()) {
        color = color.add(radiance.multiply(world.getBackground().radiance(currentRay)));
    }
    
    return color;
}

Using my own renderer, with code similar to the one above, I produced the following image. It took a few seconds to generate, so you may still see some noise in there. But, had I not used a sky, you’d see a lot more noise.

Image Order Algorithms vs. Object Order Algorithms

Both image order algorithms and object order algorithms are used for rendering. But what’s the difference?

First of all, let’s talk about image order algorithms. These algorithms iterate over the pixels in an image and compute the colors of said pixels. The first example that comes to mind is any of the ray tracing algorithms out there. In addition, these algorithms are pretty well suited for OpenCL.

Now let’s look at object order algorithms. They iterate over the objects in the scene and attempt to find out what pixels those objects occupy in the final image, if any. Once an object is known to occupy at least one pixel in the image, the color of that pixel is computed. One algorithm that does this is a scanline rasterizer. These algorithms are the ones primarily used in DirectX and OpenGL.

So, now that we know the main differences, are there any advantages and disadvantages to either? Yes, there are. Because an image tends to have a lot of pixels, usually more than there are objects in a scene, a lot of time is wasted iterating over all pixels, not to mention all the intersection tests that are required by ray tracing algorithms. So for real-time graphics, object order algorithms are the preferred way. But object order algorithms tend to complicate things a lot when it comes to photo-realistic rendering. This is where ray tracing algorithms really shine. For instance, do you want to add reflection, refraction or global illumination? Then, essentially, you just send more rays and that will solve the problem.

What is Ray Tracing?

Ray tracing, as a term, can be found in both computer graphics and physics. However, this blog post will only focus on the computer graphics part of it.

So then, what is ray tracing? Ray tracing is an umbrella term for a wide variety of image order algorithms. An image order algorithm iterates over the pixels in an image to produce it. Contrast that with object order algorithms, which iterate over the objects in the scene in order to produce the final image. However, this is only half the story. The other half is how the final image is actually produced.

To produce the final image with ray tracing, a ray is computed for each pixel and sent into the scene. The computation of the ray is based on a virtual camera in the scene and a plane defined by the image itself. When the ray has been sent into the scene, it is tested for intersections with all objects in the scene. If the ray intersects an object, the color of the pixel can be calculated by some shading algorithm. If the ray intersects more than one object, make sure the closest object is used in the shading process. If it did not intersect an object, use a constant color, such as black, or a background image to calculate the color of the pixel.
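
As a rough sketch of that process, in the same spirit as the pseudo code further up (the names Camera, World, Display, Ray, Intersection and so on are assumptions, not a specific API, and shade is a hypothetical shading function; a sketch of one follows below):

void renderRayCasting() {
    Camera camera = createCamera();
    
    World world = createWorld();
    
    Display display = createDisplay();
    
    for(int y = 0; y < display.getHeight(); y++) {
        for(int x = 0; x < display.getWidth(); x++) {
            //Compute a primary ray through the center of the pixel:
            Ray ray = camera.newRay(x + 0.5, y + 0.5);
            
            //Find the closest intersection along the ray, if any:
            Intersection intersection = world.intersection(ray);
            
            //Shade the intersection, or fall back to a constant background color:
            Color color = intersection.isIntersecting() ? shade(intersection, world) : Color.BLACK;
            
            display.update(x, y, color);
        }
    }
}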

So what should the shading algorithm do? Its purpose is to calculate the color of the pixel based on an intersection with an object. It could do as little as returning a constant color for a specific object, or it could do a lot more; the sky is the limit. However, a constant color is a good start. Once you’ve come this far, you can add lights to the scene and send so-called shadow rays from the intersection point on the object to the lights. If nothing blocks the path between the two, the light reaches the surface point, which can then be seen from the virtual camera and shaded accordingly.
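
A minimal shading function along those lines might look like the sketch below, again with hypothetical names and signatures (Light, getTextureAlbedo, getPosition, getLights, isOccluded and a Ray built from two points are all assumptions):

Color shade(Intersection intersection, World world) {
    Primitive primitive = intersection.getPrimitive();
    
    //Use a constant base color (albedo) for the object:
    Color albedo = primitive.getTextureAlbedo().getColorAt(intersection);
    
    Color color = Color.BLACK;
    
    for(Light light : world.getLights()) {
        //Send a shadow ray from the surface point towards the light:
        Ray shadowRay = new Ray(intersection.getPosition(), light.getPosition());
        
        //If nothing blocks the shadow ray, the light contributes to the pixel:
        if(!world.isOccluded(shadowRay, light)) {
            color = color.add(albedo.multiply(light.getColor()));
        }
    }
    
    return color;
}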

What is described above is often called ray casting, that is, rays sent from the camera into the scene, which are often called primary rays. From here it is possible to create other ray tracing algorithms, such as Whitted ray tracing, which sends secondary rays from the surface intersection point of the object in order to calculate reflections and refractions. There are many more variations out there, some using so-called Monte Carlo methods, which essentially send many randomized rays per pixel and average the results to get the pixel color.