
A "renderer" is a program (or a piece of computer hardware, like a graphics card) that takes as input a "scene data structure" and produces as output a "framebuffer data structure".

A scene data structure contains information that describes a "virtual scene" that we want to take a "picture" of. The renderer acts like a digital camera: it takes a picture of the scene and produces a framebuffer data structure that holds the pixel information making up that picture.

Our software renderer is made up of the code in the Scene and SceneRender folders. The Scene folder holds all the data structures needed by the renderer. The SceneRender folder holds all the algorithms that manipulate the data structures from the Scene folder.

We will discuss the algorithms in the SceneRender folder as the semester progresses. Describing these algorithms is where most of the effort is in understanding how a renderer works.

Here is a brief description of the data structures in the Scene folder.

   A Scene object has a Camera object and a list of Model objects.

   A Model object has a color and a list of LineSegment objects.

   A LineSegment object has an array of two Vertex objects. (The two Vertex objects represent the two endpoints of a line segment in 3-dimensional space.)

   A Vertex object has four doubles. (Note: A Vertex object represents a point in 3-dimensional space, so we would expect it to hold three doubles. But for technical reasons that we will explain in detail later on, we use 4-dimensional "homogeneous coordinates" to represent points in 3-dimensional space.)

   A Camera object has two Matrix objects. One matrix is called the "view matrix" and it determines where the camera is located in space and where the camera is pointed. The other matrix is called the "projection matrix" and determines the camera's "view volume", which we will explain later.

   A Matrix has four Vectors, which represent the columns of the matrix (as in Linear Algebra class).

   A Vector has four doubles. We use "homogeneous vectors" with four coordinates, just as we use four coordinates in each Vertex.

   A FrameBuffer represents a two-dimensional array of pixel data. The pixel data records the color of each pixel in the image that the renderer produces.
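The descriptions above can be summarized as a minimal sketch in Java. The class and field names below are assumptions based on the descriptions, not the exact code in the Scene folder (which has constructors, helper methods, and other details this sketch omits).

```java
// A minimal sketch of the Scene data structures. Field names are
// illustrative assumptions; the real classes in the Scene folder
// may differ in details.
import java.util.ArrayList;
import java.util.List;

class Vertex {                 // a point in 4D homogeneous coordinates
    double x, y, z, w;
    Vertex(double x, double y, double z, double w) {
        this.x = x; this.y = y; this.z = z; this.w = w;
    }
}

class LineSegment {            // two endpoints of a segment in 3D space
    Vertex[] v = new Vertex[2];
    LineSegment(Vertex v0, Vertex v1) { v[0] = v0; v[1] = v1; }
}

class Model {                  // a color and a list of line segments
    java.awt.Color color = java.awt.Color.WHITE;
    List<LineSegment> lineSegmentList = new ArrayList<>();
}

class Vector {                 // four doubles (a homogeneous vector)
    double x, y, z, w;
}

class Matrix {                 // four Vectors, the columns of the matrix
    Vector c1, c2, c3, c4;
}

class Camera {                 // where the camera is, and its view volume
    Matrix viewMatrix = new Matrix();
    Matrix projMatrix = new Matrix();
}

class Scene {                  // a camera and a list of models
    Camera camera = new Camera();
    List<Model> modelList = new ArrayList<>();
}
```

Notice how the containment structure mirrors the list above: a Scene holds Models, a Model holds LineSegments, a LineSegment holds two Vertex objects.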


Here is a brief overview of how the algorithms in the SceneRender folder process a Scene object to produce a filled-in FrameBuffer object.

First of all, remember that:
   A Scene contains a list of Model objects.
   A Model contains a list of LineSegment objects.
   A LineSegment contains two Vertex objects.
   A Vertex contains four numbers, the homogeneous coordinates of a point in "world coordinates".

The main job of the renderer is to draw, in the FrameBuffer, the appropriate pixels for each LineSegment that is in the Scene. The renderer does its work as a "pipeline" of stages. This simple renderer has just five pipeline stages. The most important steps that a LineSegment object passes through can be summarized as
     transform,
     project,
     clip,
     rasterize.
To understand the algorithms used in this "transform, project, clip, rasterize" process, we need to trace what happens to each LineSegment object as it passes through the "rendering pipeline".
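The overall control flow of the pipeline can be sketched as follows. This is a self-contained sketch, not the renderer's actual API: a vertex is represented here as a double[4] in homogeneous coordinates, a segment as a double[2][4], and the stage bodies are trivial stand-ins (the real algorithms live in the SceneRender folder). What the sketch shows is the order of the five stages and how a segment can drop out of the pipeline at the clipping stage.

```java
// A sketch of the pipeline's control flow. The stage bodies are
// placeholders; only the viewport stage and the order of the calls
// carry real content here.
class PipelineSketch {
    // P1: view transformation -- move vertices into the camera's frame (stubbed).
    static double[][] view(double[][] seg)    { return seg; }

    // P2: projection transformation -- into clip coordinates (stubbed).
    static double[][] project(double[][] seg) { return seg; }

    // P3: clipping -- returns null if the segment is outside the view volume (stubbed).
    static double[][] clip(double[][] seg)    { return seg; }

    // P4: viewport transformation -- map NDC in [-1, 1] to device (pixel) coordinates.
    static double[][] viewport(double[][] seg, int w, int h) {
        double[][] out = new double[2][4];
        for (int i = 0; i < 2; ++i) {
            out[i][0] = (seg[i][0] + 1) * 0.5 * w;  // x: [-1,1] -> [0,w]
            out[i][1] = (seg[i][1] + 1) * 0.5 * h;  // y: [-1,1] -> [0,h]
            out[i][2] = seg[i][2];
            out[i][3] = seg[i][3];
        }
        return out;
    }

    // P5: rasterization -- choose pixels for the segment (here: just one endpoint).
    static void rasterize(double[][] seg, int[][] fb) {
        int x = (int) seg[0][0], y = (int) seg[0][1];
        if (y >= 0 && y < fb.length && x >= 0 && x < fb[0].length)
            fb[y][x] = 0xFFFFFF;                    // "draw" a white pixel
    }

    // Push one LineSegment through all five stages, in order.
    static void render(double[][] seg, int[][] fb) {
        seg = view(seg);                                   // P1
        seg = project(seg);                                // P2
        seg = clip(seg);                                   // P3
        if (seg == null) return;                           // clipped away entirely
        seg = viewport(seg, fb[0].length, fb.length);      // P4
        rasterize(seg, fb);                                // P5
    }
}
```

The full renderer would wrap render() in two loops, one over the Models in the Scene and one over the LineSegments in each Model.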


Start with a single LineSegment object (from some Model in the Scene). The LineSegment has two vertices that we will label v0 and v1. This LineSegment is the input to the rendering pipeline, which has the following five stages.

          v0 v1      A LineSegment object
           \ /
            |
            | world coordinates (of v0 and v1)
            |
        +-------+
        |       |
        |   P1  |    View transformation (of the vertices)
        |       |
        +-------+
            |
            | view coordinates (for v0 and v1)
            |
        +-------+
        |       |
        |   P2  |    Projection transformation (of the vertices)
        |       |
        +-------+
            |
            | clip coordinates (for v0 and v1)
            |
           / \
         /     \
       /    P3   \   Clipping (of the line segment)
       \         /
         \     /
           \ /
            |
            | normalized device coordinates (NDC)
            |
        +-------+
        |       |
        |   P4  |    Viewport transformation (of the vertices)
        |       |
        +-------+
            |
            | device coordinates (for v0 and v1)
            |
           / \
         /     \
       /    P5   \   Rasterization (of the line segment)
       \         /
         \     /
           \ /
            |
            |  pixels (for this line segment)
            |
           \|/
        FrameBuffer
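Stages P1 and P2 both come down to one operation: multiplying a 4x4 matrix by a vertex in homogeneous coordinates. Since a Matrix stores its four columns as Vectors, the product is the linear combination x*c1 + y*c2 + z*c3 + w*c4 of the columns (as in Linear Algebra class). Here is a sketch of that operation; the class and field names are assumptions based on the descriptions above, not the renderer's actual code.

```java
// Matrix-times-vector in homogeneous coordinates, with the matrix
// stored column-by-column as in the Scene folder's description.
class Vector4 {
    final double x, y, z, w;
    Vector4(double x, double y, double z, double w) {
        this.x = x; this.y = y; this.z = z; this.w = w;
    }
}

class Matrix4 {
    final Vector4 c1, c2, c3, c4;   // the four columns
    Matrix4(Vector4 c1, Vector4 c2, Vector4 c3, Vector4 c4) {
        this.c1 = c1; this.c2 = c2; this.c3 = c3; this.c4 = c4;
    }

    // M * v = v.x*c1 + v.y*c2 + v.z*c3 + v.w*c4
    Vector4 times(Vector4 v) {
        return new Vector4(
            v.x*c1.x + v.y*c2.x + v.z*c3.x + v.w*c4.x,
            v.x*c1.y + v.y*c2.y + v.z*c3.y + v.w*c4.y,
            v.x*c1.z + v.y*c2.z + v.z*c3.z + v.w*c4.z,
            v.x*c1.w + v.y*c2.w + v.z*c3.w + v.w*c4.w);
    }
}
```

The view matrix of stage P1 and the projection matrix of stage P2 are both applied to the two vertices of each LineSegment by exactly this kind of multiplication.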