University of Maryland at College Park

- I. Introduction
- II. Vector Representation
- III. Matrix Representation
- IV. Coordinate Frames
- V. Viewer Frame
- VI. 3D Volume Clipping
- VII. Perspective Transformation
- VIII. Hidden Surface/Line Removal
- IX. Light and Color
- X. Data Structure Used
- XI. Putting All Together
- XII. Conclusion
- XIII. Resources
- XIV. Acknowledgement

A better way to do it is to append an extra element at the end of the array. For points, the 4th element will be 1; for vectors, the 4th element will be 0. For example, A = (x, y, z, 0) indicates that A is a vector, and B = (x, y, z, 1) indicates that B is a point.

We know that subtracting a point from a point yields a vector, and adding a vector to a point yields a point. We can see that these operations still hold true under the above representation. There are also some illegal operations, such as adding two points. These operations can also be detected when using the above representation.

Therefore, you can see the nice property of this particular representation: every legal operation on points and vectors can be implemented by performing the corresponding arithmetic operation on their coordinate lists.

- If the last element is "1": the result is a **point**.
- If the last element is "0": the result is a **vector**.
- Otherwise: this was an **illegal** operation.

Take a look at the source code for the detailed implementation of vectors.
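The rules above can be sketched in a few lines. This is a minimal illustration, not the applet's actual Vector class; the names are hypothetical.

```java
// Minimal sketch of the homogeneous 4-element representation:
// last element 1 = point, 0 = vector. Ordinary component-wise
// arithmetic on the 4-element arrays implements the operations,
// and the last element of the result reveals its type.
public class Homogeneous {
    static double[] add(double[] a, double[] b) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) r[i] = a[i] + b[i];
        return r;
    }

    static double[] subtract(double[] a, double[] b) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) r[i] = a[i] - b[i];
        return r;
    }

    // Interpret the last element of a result.
    static String kindOf(double[] r) {
        if (r[3] == 1) return "point";
        if (r[3] == 0) return "vector";
        return "illegal";
    }
}
```

For example, point minus point comes out with last element 0 (a vector), while point plus point comes out with last element 2, flagging an illegal operation.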

Each operation on an object, such as a translation or rotation, can be represented by one matrix. Multiplying by the matrix transforms a vector/point by the operation. For example, if P = (x, y, z, 1) is a point's coordinates before the operation, and M = (1 0 0 0) (0 1 0 0) (0 0 1 0) (a b c 1) is the matrix representation of translating the object a units right, b units up, and c units forward, then P' = P*M gives the point's coordinates after the operation. (Since points are row vectors here, the translation components sit in the last row of the matrix.)

A sequence of operations can also be composed into one operation. For example, if matrix M1 represents the first operation and matrix M2 represents the second, then the matrix M = M1*M2 represents this sequence of 2 operations. If P = (x, y, z, 1) is the point's coordinates before the sequence is performed, then P' = P * M gives its coordinates after performing operation 1 and then operation 2.

The following are matrix representations for some operations:

- Translate object by vector V:

M = (1 0 0 0) (0 1 0 0) (0 0 1 0) (V[0] V[1] V[2] 1)

- Rotate object about the X axis by d degrees:

M = (1 0 0 0) (0 cos(d) -sin(d) 0) (0 sin(d) cos(d) 0) (0 0 0 1)

- Rotate object about the Y axis by d degrees:

M = (cos(d) 0 sin(d) 0) (0 1 0 0) (-sin(d) 0 cos(d) 0) (0 0 0 1)

- Rotate object about the Z axis by d degrees:

M = (cos(d) -sin(d) 0 0) (sin(d) cos(d) 0 0) (0 0 1 0) (0 0 0 1)

Take a look at the source code for more matrix representations of operations.
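The row-vector convention (P' = P * M, translation in the last row) and the composition rule M = M1*M2 can be sketched as follows. This is an illustrative sketch, not the applet's actual matrix class.

```java
// Sketch of 4x4 matrix transforms in the row-vector convention used above.
public class Mat4 {
    // r = p * M  (p is a 4-element row vector, M is 4x4)
    static double[] transform(double[] p, double[][] m) {
        double[] r = new double[4];
        for (int col = 0; col < 4; col++)
            for (int k = 0; k < 4; k++)
                r[col] += p[k] * m[k][col];
        return r;
    }

    // C = A * B: composing operation A followed by operation B.
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Translation by (a, b, c): last row holds the offsets.
    static double[][] translation(double a, double b, double c) {
        return new double[][] {
            {1, 0, 0, 0},
            {0, 1, 0, 0},
            {0, 0, 1, 0},
            {a, b, c, 1}
        };
    }
}
```

Note that a vector (last element 0) is unaffected by a translation matrix, while a point (last element 1) picks up the offsets, which is exactly why the homogeneous representation from section II is convenient.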

- **X (1, 0, 0)**: points up
- **Y (0, 1, 0)**: points right
- **Z (0, 0, 1)**: points forward

- The **world** coordinate frame, which is the common coordinate frame.
- The **viewer frame** coordinate frame, which is a coordinate frame relative to the camera's point of view. This frame will be explained in a later section.
- The **viewer window (canvas)** coordinate frame. This uses the standard canvas board coordinate frame: its origin is at the upper-left corner, the x direction points right, and the y direction points downward.

The answer is through matrix multiplication. As explained in section III, we can define a matrix M which maps points represented in G's coordinate frame to their representation in F's coordinate frame, such that for any geometric object v (point or vector) we have **v[F] = v[G]*M**.

What is M? Let G.O[F] be the origin of frame G relative to frame F, and let G.b0[F], G.b1[F], G.b2[F] be the basis vectors of frame G relative to frame F. Then **M = (G.b0[F]) (G.b1[F]) (G.b2[F]) (G.O[F])**, with each of these as one row. Using this matrix multiplication, we can transform coordinates relative to frame G into coordinates relative to frame F. By the same token, **v[G] = v[F] * M'**, where M' is the inverse of M, transforms coordinates relative to frame F into coordinates relative to frame G.
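The construction of M from G's basis vectors and origin can be sketched directly. This is an illustrative sketch (hypothetical names, not the applet's code), using the row-vector convention v[F] = v[G] * M.

```java
// Sketch: the frame-change matrix's rows are G's basis vectors and origin,
// all expressed in frame F (each a 4-element homogeneous row).
public class FrameChange {
    static double[][] frameMatrix(double[] b0, double[] b1, double[] b2, double[] origin) {
        return new double[][] { b0, b1, b2, origin };
    }

    // v[F] = v[G] * M
    static double[] apply(double[] v, double[][] m) {
        double[] r = new double[4];
        for (int col = 0; col < 4; col++)
            for (int k = 0; k < 4; k++)
                r[col] += v[k] * m[k][col];
        return r;
    }
}
```

For instance, if G has the standard basis but its origin sits at (5, 0, 0) in F, a point (1, 2, 3) in G maps to (6, 2, 3) in F, while a vector is unaffected by the origin row.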

The viewer frame coordinate system will be explained in the next section.

- Center of Projection (*COP*): This is a point in 3D space. It is usually where the human eye is. This point is also the origin of the viewer frame.
- Viewing Window: This is the window onto which the 3D object is projected. It is represented as the canvas board on the applet. It is defined by the following 3 variables:
  - Viewing Distance (*VD*): This is the distance from the *COP* to the viewing window.
  - Viewing Window Width (*VW*): This is the width of the viewing window.
  - Viewing Window Height (*VH*): This is the height of the viewing window.

- Viewing Direction (Z-basis) Vector (*VFZ*): a vector to which the projection plane is orthogonal. Its magnitude is *VD*/100. Therefore, it is defined as **(0, 0, *VD*/100)**.
- X-basis Vector (*VFX*): a vector pointing in the right direction. Its magnitude is **(*VW*/2)/100**. Therefore, it is defined as **((*VW*/2)/100, 0, 0)**.
- Y-basis Vector (*VFY*): a vector pointing in the up direction. Its magnitude is **(*VH*/2)/100**. Therefore, it is defined as **(0, (*VH*/2)/100, 0)**.

From the above information, we can calculate two matrices that transform the world coordinate system to the viewer frame coordinate system and vice versa. These two matrices need to be recalculated every time the camera moves.

- matrix *F2W_Matrix*: The matrix that transforms viewer frame coordinates to world coordinates. It equals:

(VFX[0], VFX[1], VFX[2], VFX[3])

(VFY[0], VFY[1], VFY[2], VFY[3])

(VFZ[0], VFZ[1], VFZ[2], VFZ[3])

(COP[0], COP[1], COP[2], COP[3])

- matrix *W2F_Matrix*: The matrix that transforms world coordinates to viewer frame coordinates. It equals the inverse of *F2W_Matrix*.

- matrix *F2S_Matrix*: The matrix that transforms viewer frame coordinates to screen coordinates. It equals:

((VW/2)/100, 0, 0, 0)

(0, -(VH/2)/100, 0, 0)

(0, 0, 1, 0)

(VW/2, VH/2, 0, 1)

- matrix *S2F_Matrix*: The matrix that transforms screen coordinates to viewer frame coordinates. It is the inverse of *F2S_Matrix*. It equals:

(200/VW, 0, 0, 0)

(0, -200/VH, 0, 0)

(0, 0, 1, 0)

(-100, 100, 0, 1)
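As a small consistency check, the two matrices above really are inverses: a point mapped to the screen by *F2S_Matrix* and back by *S2F_Matrix* returns to its original viewer-frame coordinates. A sketch (illustrative names, assuming a VW-by-VH canvas):

```java
// Sketch of the viewer-frame <-> screen mapping (row-vector convention,
// translation in the last row). Not the applet's actual code.
public class ScreenMap {
    // r = p * M
    static double[] transform(double[] p, double[][] m) {
        double[] r = new double[4];
        for (int col = 0; col < 4; col++)
            for (int k = 0; k < 4; k++)
                r[col] += p[k] * m[k][col];
        return r;
    }

    // F2S_Matrix: scales, flips y (screen y grows downward), and
    // shifts the origin to the canvas center.
    static double[][] f2s(double vw, double vh) {
        return new double[][] {
            {(vw / 2) / 100, 0, 0, 0},
            {0, -(vh / 2) / 100, 0, 0},
            {0, 0, 1, 0},
            {vw / 2, vh / 2, 0, 1}
        };
    }

    // S2F_Matrix: the inverse mapping.
    static double[][] s2f(double vw, double vh) {
        return new double[][] {
            {200 / vw, 0, 0, 0},
            {0, -200 / vh, 0, 0},
            {0, 0, 1, 0},
            {-100, 100, 0, 1}
        };
    }
}
```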

- Hither (or near) clipping plane: This is a plane orthogonal to the viewing direction and in front of the eye. Objects closer than this plane are clipped out. The distance from the *COP* to this plane is denoted *F*.
- Yon (or far) clipping plane: This is a plane orthogonal to the viewing direction and in front of the eye. Objects farther than this plane are clipped out. The distance from the *COP* to this plane is denoted *B*.

The method we are using for clipping is a generalization of the *Liang-Barsky* algorithm. First, we consider how planes are represented. Each plane divides 3D space into two halfspaces. A halfspace can be defined by two items: one point that lies on the plane, and a normal vector that is orthogonal to the plane and points into the halfspace. If a point is not within the canonical view volume, in other words, if it falls outside any of the 6 halfspaces, it is clipped out of the volume.

What are those 6 clipping planes (halfspaces)? From the view window and the viewer frame's 3 unit vectors defined in section IV, the 6 clipping halfspace representations are:

- Hither Plane: reference point (*RP[0]*): (0, 0, F); normal vector (*NV[0]*): (0, 0, 1)
- Yon Plane: reference point (*RP[1]*): (0, 0, B); normal vector (*NV[1]*): (0, 0, -1)
- Top Plane: reference point (*RP[2]*): (0, 0, 0); normal vector (*NV[2]*): (0, -1, 1)
- Bottom Plane: reference point (*RP[3]*): (0, 0, 0); normal vector (*NV[3]*): (0, 1, 1)
- Left Plane: reference point (*RP[4]*): (0, 0, 0); normal vector (*NV[4]*): (1, 0, 1)
- Right Plane: reference point (*RP[5]*): (0, 0, 0); normal vector (*NV[5]*): (-1, 0, 1)

Consider two vectors V0 = P0 - RP and V1 = P1 - RP. The segment S = P0P1 crosses the clipping plane only if V0 and V1 point in different directions relative to NV. In other words, this is true if the dot products s0 = V0**.**NV and s1 = V1**.**NV have opposite signs. (The negative sign corresponds to the endpoint that is to be clipped.) In particular, the fraction at which the segment intersects the plane is **f = |s0| / (|s0| + |s1|)**, and the intersection point P can be computed as **P = P0 + f * (P1 - P0)**.

Take a look at the source code for the detailed clipping implementation.
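The crossing test and intersection formula above can be sketched as follows. This is a minimal illustration of one plane's clipping step, not the applet's full clipper; the names follow the text (RP, NV, s0, s1, f).

```java
// Sketch of clipping a segment P0->P1 against one halfspace plane (RP, NV).
public class Clip {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Fraction f at which the segment crosses the plane,
    // or -1 if s0 and s1 do not have opposite signs (no crossing).
    static double crossingFraction(double[] p0, double[] p1, double[] rp, double[] nv) {
        double s0 = dot(new double[]{p0[0]-rp[0], p0[1]-rp[1], p0[2]-rp[2]}, nv);
        double s1 = dot(new double[]{p1[0]-rp[0], p1[1]-rp[1], p1[2]-rp[2]}, nv);
        if (s0 * s1 >= 0) return -1;
        return Math.abs(s0) / (Math.abs(s0) + Math.abs(s1));
    }

    // Intersection point P = P0 + f * (P1 - P0).
    static double[] intersect(double[] p0, double[] p1, double f) {
        return new double[] { p0[0] + f*(p1[0]-p0[0]),
                              p0[1] + f*(p1[1]-p0[1]),
                              p0[2] + f*(p1[2]-p0[2]) };
    }
}
```

For example, clipping the segment (0,0,0)-(0,0,2) against a hither plane at z = 1 (RP = (0,0,1), NV = (0,0,1)) gives f = 0.5 and intersection (0,0,1).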

Since we do not want Z to be 0, we need to do the clipping first and then the perspective transformation. The clipping procedure clips out all points outside the viewing volume, including points with Z = 0.
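The text does not spell out the projection formula, but in the normalized viewer frame above, the side planes' normals (e.g. (0, -1, 1)) bound the volume by |x| <= z and |y| <= z, which is consistent with a plain divide-by-z projection onto the z = 1 window. The following is a sketch of that assumption, not necessarily the applet's exact code:

```java
// Hedged sketch: perspective divide onto the z = 1 plane of the viewer frame.
// Assumes clipping has already removed all points with z <= 0, so the
// division is safe.
public class Perspective {
    static double[] project(double x, double y, double z) {
        return new double[] { x / z, y / z };
    }
}
```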

First, we compute each face's outward-pointing normal vector. This can be done by taking the cross product of two vectors on the face. For example, if V0 and V1 are two vectors on the face, then the normal vector NV of the face is **NV = V0*V1**.

After computing the normal vectors, we observe that if a normal vector points away from the view point, then its face is a hidden face and will not be shown in the view window. The view point is the eye (**COP**), which is at the origin. Again, we can use the dot product to determine this situation. Consider a face with normal vector NV and a reference point RP (this could be any arbitrary point on the face); a reference vector RV is defined by RV = RP - COP, which is simply RP since the COP is at the origin. If NV**.**RV is nonnegative, the normal points away from the eye and the face is hidden.

For a wireframe object, hidden line determination is done in a similar way: if a line does not belong to any visible face, then the line is hidden.

Take a look at the source code for the detailed implementation.
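The back-face test above fits in a few lines. This sketch uses illustrative names, with the eye (COP) at the origin so that RV = RP:

```java
// Sketch of back-face culling: NV = V0 x V1, and with the eye at the
// origin the face is hidden when NV . RP >= 0 (normal points away).
public class BackFace {
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    static boolean isHidden(double[] nv, double[] rp) {
        return nv[0]*rp[0] + nv[1]*rp[1] + nv[2]*rp[2] >= 0;
    }
}
```

For a face at z = 5 in front of the eye, a normal (0, 0, 1) points away from the origin, so the face is hidden; a normal (0, 0, -1) faces the eye, so it is visible.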

Therefore, we can define the variable **brightness** as follows:

Because we do not want to lose the object's true color even when it is very dark, we recalculate the color in the following way:

(**R/2 + R/2 * brightness, G/2 + G/2 * brightness, B/2 + B/2 * brightness**)
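The recalculation keeps half of each base channel and scales the other half by the brightness, so even at brightness 0 the face still shows half its true color. A sketch (illustrative names, brightness assumed in [0, 1]):

```java
// Sketch of the color recalculation: each channel = base/2 + base/2 * brightness.
public class Shade {
    static int[] shade(int r, int g, int b, double brightness) {
        return new int[] {
            (int) (r / 2.0 + r / 2.0 * brightness),
            (int) (g / 2.0 + g / 2.0 * brightness),
            (int) (b / 2.0 + b / 2.0 * brightness)
        };
    }
}
```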

- *points*: an object of the Vertex class, which is a subclass of Vector. It contains the following items:
  - *OutOfRange*: a flag that indicates whether this point is outside of the view volume.
  - *element*: a 4-element array for the vector representation.

- *lines*: an object of the Line class, which contains the following items:
  - *OutOfRange*: a flag that indicates whether this line is entirely outside of the view volume.
  - *start*: the starting point index (starting from 0).
  - *end*: the ending point index (starting from 0).

- *faces*: an object of the Face class, which contains the following items:
  - *behind*: a flag that indicates whether this face is visible or invisible.
  - *line*: an array of all the lines enclosing this face. Lines are ordered clockwise. Each line is indicated by its line index (starting from 1). If a line value is negative, the line's start and end points need to be reversed to make it clockwise.
  - *numLines*: the total number of lines enclosing this face.
  - *color*: for a surface model with colored faces, the color of the face; for a surface model with a light source, the color of the face shown on the screen according to its brightness.

- *color*: for a surface model with a light source, the color of this object.

- Transform the object in 3D (world coordinate frame) space according to its movement (translation, rotation, spinning).
- Transform the object from the world coordinate frame to the viewer frame.
- Determine hidden surfaces.
- Clip the object against the canonical view volume.
- Perform the perspective transformation, mapping the 3D object to the view window.
- Transform viewer frame coordinates to the screen coordinate system.
- Paint the object onto the screen (canvas).

- **Introduction to Computer Graphics** by *James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Richard L. Phillips*.
- **Computer Graphics Class Notes** by *Dave Mount*, University of Maryland at College Park.