Sick of hearing people name off algorithms that you should use, yet you have no clue what they are, much less how to implement them? Sick of people who won't take the time to explain algorithms and instead refer you to a book that you can't afford? Well, here's some info to help you out, for both the beginner and the advanced.

I've tried to make this page include enough information for the newbie to comprehend, yet enough technical meat and potatoes content for the graphics veteran to appreciate, while trying to keep it fairly compact. Please drop me an e-mail if something needs more clarity or depth, or if you have some juicy morsels that would fit in here, or if you've heard of some algorithm that isn't here, so I may look it up.

Source code is in the Gigo programming language, a language that I am designing; I'll soon have a page describing it and its incredibly awesome features. :) As for the actual code, as in most tutorials, the examples are not heavily optimized; they're included to illustrate implementation.

---Under Construction---

Yeah, I know it's kinda sparse now, but info's on the way. Plus, I'm putting together animated .GIFs to describe the algos better, and will put up way more source code.

Last Updated: 11/20/98 (!!!)

Note: Please do not link to this page as this webspace is constantly being updated and moved. Instead, please point all links to http://geocities.datacellar.net/ResearchTriangle/Lab/1767, my homepage.

Mathematical Basics:

Graphics Algorithms:

Sorting Algorithms:

Compression Algorithms:


Mathematical Basics


Cross Product

The cross product, a three-dimensional vector operation, is defined as the determinant of a 3x3 matrix with the x, y, and z unit vectors in the top row, the first vector in the second row, and the second vector in the third row. This works out to be (y1*z2-z1*y2, z1*x2-x1*z2, x1*y2-y1*x2). The resultant vector's length is the product of the two vectors' magnitudes and the sine of the angle between them, and it points in the direction of the surface normal of the plane formed by the two vectors, following the right-hand rule. Graphically, if you hold out your right hand, palm up, the first vector corresponds to your thumb (pointing to the right), the second vector corresponds to your index finger (pointing away from you), the plane formed is the surface of your palm (not the back of your hand), and your middle finger (pointing upwards from your palm) corresponds to the surface normal of your palm, as defined by your thumb and index finger, in that order. Easy as 1, 2, 3. :)

Note that order does matter, and if the first vector (thumb) pointed away from you, and the second vector (index finger) pointed to your right, your resultant vector (middle finger) points downward, in the opposite direction of the other configuration. Also, when surface normals are discussed, they are always normalized. Note that if the vectors being cross multiplied are already normalized and are perpendicular, the resultant vector should (mathematically) be automatically normalized, but due to round-off errors and such, it's good to normalize it anyway.
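
For reference, here's a quick C sketch of the cross product (the vec3 type is just something made up for these examples, not from any particular library):

typedef struct { float x, y, z; } vec3;

/* Cross product: returns a vector perpendicular to a and b,
   following the right-hand rule. */
vec3 cross(vec3 a, vec3 b)
{
    vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}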


Dot Product

The dot product operation is defined as the sum of the products of the pairs of vector components ((x1,y1) dot (x2,y2) = x1*x2+y1*y2; (x1,y1,z1) dot (x2,y2,z2) = x1*x2 + y1*y2 + z1*z2; etc.) Conveniently, this also works out to be the product of the magnitudes of the vectors (a vector's magnitude is its length: the square root of the sum of the squares of its components) and the cosine of the angle between them. If two vectors are normalized (which means that they have been scaled so their magnitude is 1), then the dot product returns the (very useful) cosine of the angle between the two vectors. Thus, if two 2-D vectors are perpendicular, x1*x2+y1*y2 will equal zero, and if they're parallel and normalized, it will equal one (or negative one if they point in opposite directions).
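
And a C sketch of the dot product, plus recovering the angle from it (again using the illustrative vec3 type):

#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Dot product of two 3-D vectors. */
float dot(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Angle between two arbitrary (not necessarily normalized) vectors,
   using dot(a,b) = |a| * |b| * cos(angle). */
float angle_between(vec3 a, vec3 b)
{
    float maga = sqrtf(dot(a, a));   /* |a| */
    float magb = sqrtf(dot(b, b));   /* |b| */
    return acosf(dot(a, b) / (maga * magb));
}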


Interpolation

Interpolation shows up in many different algorithms, and the interpolation method used within an algorithm can usually be swapped to trade quality against performance. Some types are listed here, in order of complexity:

Linear Interpolation

This is the most commonly used, quickest, and simplest form of interpolation. Based on a straight-line relationship between two points, values, vectors, colors, etc., it can be used to find the value at a specified weight between the two control values, or to find the weight that gives a specified value. It also only requires two control values, whereas more complex methods require 3 or more.
The root form, given points P1 and P2, is P=P1+u*(P2-P1). u is the weight factor and is defined so that P=P1 when u=0, and P=P2 when u=1. In a setting like bilinear interpolation, u is given and P (the weighted color value) may be found by the previous equation.
However, cases like scan conversion do not use a 0..1 span to define their interpolation range, and instead define the range from y1 to y2. Therefore y/(y2-y1), where y is the current scan line, ranging from 0 to (y2-y1), replaces u, giving x=x1+(y/(y2-y1))(x2-x1). As you can see, m=(x2-x1)/(y2-y1) is the constant slope for the current linear relationship, and as y increments or decrements, m can simply be added to or subtracted from x to get its subsequent value. However, this requires floating point or adequate fixed point representation, because m is a ratio. For an integer implementation of step-wise linear interpolation, see Bresenham's line algorithm.
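
Here's a minimal C sketch of both forms: the root 0..1 form, and the step-wise form used in scan conversion:

/* Root form: u = 0 gives p1, u = 1 gives p2. */
float lerp(float p1, float p2, float u)
{
    return p1 + u * (p2 - p1);
}

/* Step-wise form, as used in scan conversion: walk the x value of an
   edge from (x1,y1) down to (x2,y2), adding the constant slope m once
   per scan line.  Assumes y2 > y1. */
void walk_edge(int x1, int y1, int x2, int y2)
{
    float m = (float)(x2 - x1) / (float)(y2 - y1);
    float x = (float)x1;
    int y;
    for (y = y1; y <= y2; y++) {
        /* the edge crosses scan line y at x -- use (int)x here */
        x += m;
    }
}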

Spline Interpolation


Rotation


Hexadecimal and Binary Notation

Our standard 10-digit (0-9) numbering system, based on the number of fingers we're endowed with, follows certain rules. These same rules apply to numbering systems with differing numbers of digits. The main numbering notations used when dealing with computers (binary's 2-digit and hexadecimal's 16-digit) are based on the bit, which is a unit of storage in a computer, usually a transistor or capacitor that holds a voltage at a high or low value. Thus, there are 2 states that this bit can have, and they're called 0 and 1, the only 2 digits in the binary numbering system. Binary numbers are denoted by a percent sign (%10010).

A byte of data holds 8 bits. Since each bit can have one of two values, a byte can hold 2*2*2*2*2*2*2*2 (or 2^8) = 256 distinct states, numbered 0-255. 256 individual digits is a little too much to define, so instead of having a single digit describing a byte, two digits, each 0-15, represent a byte. Each has one of 16 values (hence the name: hexa=6, deci=10, so hexadeci=16), and 16*16=256 combinations of these two digits nicely represent a byte. Now, the question is: what do we use for digits for the numbers 10-15? Easy cop out. The 16 hexadecimal digits are dubbed 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F, and hex numbers are denoted by a dollar sign ($F13A).

Now that we know a little bit about where these numbering systems came from, let's see how to use them. In base 10 (regular decimal), look at what happens in the simple addition 5+6. Eleven is greater than our base (10), so we have 1 base and 1 left over, or 11. Binary and hex work exactly the same way, except that our carry-over factor is 2 or 16 instead of 10.

Dec:  001, 010, 100 = 1,10,100
Bin: %001,%010,%100 = 1,2,4
Hex: $001,$010,$100 = 1,16,256

Dec        Bin            Hex
  5(=5)    %010(=0+2+0=2) $05(=0+5)
 +6(=6)    %011(=0+2+1=3) $0E(=0+14)
 --        ----           ---
 11(=10+1) %101(=4+0+1=5) $13(=16+3=19)

Hexadecimal is probably a little easier to correlate to decimal than binary, and there are a few shortcuts that can be taken from decimal. For instance, adding 9 to a decimal number increases the 10's digit by one and decreases the 1's digit by one (55+9=64). This is because 9 is one less than the base (10). In hex, $F(=15) works the same way, because it's 1 less than 16. So, $28+$F=$37 etc.

Multiplying a number by its base shifts the number one slot to the left. 35*10=350 exactly the same way as %1101*%10=%11010 (13*2=26), and $FE*$10=$FE0 (254*16=4064). As you can see, when dealing with bytes where you have to multiply or divide by 256 ($100), hex makes this very easy because you only need to shift digits left or right instead of pulling out the calculator. Given a list that contains 256-byte long entries, and the list starts at address $0000, say you need to extract the 15th byte of the 97th entry. 97*256+15 is a little tough in decimal, but in hex, it's simply $61*$100+$F=$610F. This is a 4-digit hex number, which means that it covers 2 bytes. Simply store the entry ($61) in the higher byte and the byte number ($0F) in the lower byte, and you get a free multiply by 256. To correlate to decimal, if you have 100-byte entries: the 15th byte of the 97th entry would be easily calculated at byte 9715. To convert 9715 into bytes, though, would require a divide/remainder by 256 (9715/256=37 rem 243), so the high byte gets 37 and the low byte gets 243 (37*256+243=9715), a much more roundabout method than using hex in the first place.
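
In C terms, that packing is just a shift and an OR; a small illustrative snippet:

#include <stdio.h>

int main(void)
{
    /* The "free multiply by 256": pack the entry number into the
       high byte and the byte offset into the low byte. */
    unsigned short address = (0x61 << 8) | 0x0F;  /* $610F = 24847 */
    unsigned char  entry   = address >> 8;        /* $61 = 97 */
    unsigned char  offset  = address & 0xFF;      /* $0F = 15 */
    printf("%04X -> entry %d, byte %d\n", address, entry, offset);
    return 0;
}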


Graphics Algorithms


A-Buffer Algorithm

This is an extension to the z-buffer algorithm that incorporates antialiasing by supersampling. Each pixel has a respective a-buffer, which is a small (usually 8x8) z-buffered pixel array. Each polygon rendered is rendered at this sub-pixel resolution, and a bitmask is kept of which polygons occupy which subpixels. These bitmasks then provide the weight for each polygon's contribution to the final color of the pixel. 3-D Studio uses this method to anti-alias its polygonal renderings.


Backface Culling Algorithm

This algorithm assumes that all polygons are one-sided, therefore polygons which are not facing the camera can be skipped over in the rendering process to save time. There are 2 methods of backface culling that I am aware of:

Pre-Scan Conversion

Once the polygon vertices are rotated and ready to be projected onto the screen, the surface normal is checked for direction. Say, for instance, that the renderer puts the x-axis along the bottom of the screen (pointing right), the y-axis into the screen, and the z-axis up along the left edge of the screen. Then any polygon whose surface normal has a positive y component is facing away from the camera and need not be projected. To find the surface normal, it may be pre-calculated along with the polygon model, then rotated along with the model during transformation, or the normal may be calculated from the cross product of two of the polygon's edges.

During Scan Conversion

If the transformation matrix already includes projection into screen coordinates, and all of the polygon edges are in a known (clockwise or counterclockwise) order, then backwards-facing polygons will be in the opposite order on the screen. Thus, the scan converter can check whether the left x-coordinate is to the right of the right x-coordinate (or, redundantly, vice-versa), and skip the polygon.
This second method skips the costly transformation of polygon normals and the cross product calculation, but it brings backfacing polygons all the way to the scan converter, meaning camera transformation and projection have already been applied to them. In many cases, though, this is a smaller cost than those inflicted by the first method. I first saw this approach in Stephen Judd's 3-D engines for the Commodore 64.
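
Here's a rough C sketch of the second method's test: with vertices projected to the screen in a known winding order, the sign of the 2-D cross product of two consecutive edges tells whether the polygon has flipped. (The exact sign to test depends on your axis and winding conventions; this is just the shape of the check.)

typedef struct { float x, y; } vec2;

/* For a convex polygon with assumed clockwise screen-space winding,
   returns nonzero if it has flipped to counterclockwise, i.e. is
   facing away.  v0, v1, v2 are any three consecutive projected
   vertices. */
int is_backfacing(vec2 v0, vec2 v1, vec2 v2)
{
    /* z-component of the cross product of the two edges */
    float z = (v1.x - v0.x) * (v2.y - v1.y)
            - (v1.y - v0.y) * (v2.x - v1.x);
    return z > 0.0f;   /* sign convention depends on your axes */
}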


Bilinear Interpolation Algorithm

This is a technique to remove the infamous chunkies from texture maps (you know, those large squares of the texture that you can see when a polygon is zoomed in close). It can also be used to keep any rectangular array of data from appearing as a grid of discrete points. Given a set of coordinates in floating point, fixed point, or any other representation that is more precise than the array grid, this function returns a weighted average of the four surrounding points, without too much overhead.

As the name implies, linear interpolation occurs in two directions. The order (horiz. then vert., or vert. then horiz.) does not matter, but I'll use horizontal, followed by vertical interpolation. First, take the sub-grid, or fractional, portion of the x-coordinate and interpolate both the top set and the bottom set of corner values with this sub-grid value. Second, take the sub-grid portion of the y-coordinate and interpolate the intermediate values from the first interpolation to find the final value of the sub-grid point.

This method works very well with smooth, "real-world" type images. With images that have pixel-level detail, however, the orthogonal nature of this algorithm can sometimes be seen and distracting artefacts can develop when color lines are slightly rotated from the orthogonal basis. Using more sophisticated interpolation schemes between the four points can reduce this, but only at a much higher computational cost.

Sample Source Code:

func bilinear(x:float, y:float, bitmap(,):pixel):pixel
[ ;Integer coordinates of the sample point
  xint = (x):int
  yint = (y):int

  ;4 surrounding sample points
  topleft = bitmap(xint,yint)
  topright = bitmap(xint+1,yint)
  botleft = bitmap(xint,yint+1)
  botright = bitmap(xint+1,yint+1)

  ;Fractional portions of x and y
  fx = x - xint
  fy = y - yint

  ;Interpolate top and bottom edges
  top = topleft + fx*(topright - topleft)
  bot = botleft + fx*(botright - botleft)

  ;Interpolate vertically
  return top + fy*(bot - top)
]


Box Filtering Algorithm

This operation is performed on bitmaps for effects such as sharpening and blurring, which take into account not only the sample pixel but also the pixels that surround it. A box filter is simply an array of weighting factors that is applied to an array of pixels to calculate a new pixel, usually located at the center of the array. Some examples are:

  Blur        Sharpen      Smear to the left
[ 1 2 1 ]   [ -1 -2 -1 ]    [ 0 0 1]
[ 2 4 2 ]   [ -2 13 -2 ]    [ 0 2 1]
[ 1 2 1 ]   [ -1 -2 -1 ]    [ 0 0 1]

Box filters may be of any size, but if the dimensions of the filter are an even number, the image will appear to shift in one direction, because an even-dimensioned array has no "center" component. To ensure that the filter does not introduce any unwanted change into the intensity of the image, all weighting factors in the filter should total to 1. This is easily achieved with floating point, but when integer weights are used, the resultant pixel value must be divided by the sum of the weights to normalize the intensity.

Another common error involving the implementation of box filters is replacing calculated pixels back into the source image. Since box filters operate on surrounding pixels, changes made to the image while filtering will propagate throughout the image, distorting it severely. A new image should be allocated, and the filtered values should be placed there, leaving the source image intact. Also note that a special case arises at the border of the image, as border pixels do not have neighboring pixels in certain directions.

Sample Source Code:

;'source' dimensions are in the range (0 to x,0 to y)
;'filter' dimensions are in the range (-x to x, -y to y)
;return value dimensions are the same as 'source'
func boxfilter(source(,):pixel, filter(,):float):dimension(source):pixel
[ ;Get the dimensions of the input arrays
  [srcx,srcy] = dimension(source)
  [boxx,boxy] = dimension(filter)

  ;Loop through the image
  for y,0,srcy,1
  [ for x,0,srcx,1
    [ ;Initialize our accumulators to zero
      pix:pixel = 0
      totweight:float = 0

      ;Clip our filter loop so we stay on the source image
      startx = -boxx
      starty = -boxy
      endx = boxx 
      endy = boxy
      while (startx+x)<0, inc startx
      while (starty+y)<0, inc starty
      while (endx+x)>srcx, dec endx
      while (endy+y)>srcy, dec endy

      ;Perform the filter operation
      for by,starty,endy,1
      [ for bx,startx,endx,1
        [ totweight = totweight + filter(bx,by)
          pix = pix + filter(bx,by)*source(x+bx,y+by)
        ]
      ]

      ;Scale the pixel back, then put it in the output array
      pix = pix/totweight
      retval(x,y)=pix
    ]
  ]
]


Bresenham's Line Algorithm

Developed at a time when floating point math was still very much slower than integer math, this algorithm performs a step-wise linear interpolation using only integer math. It uses different cases for different orientations of a line: mostly-horizontal segments are traversed in the horizontal direction, and mostly-vertical segments are traversed vertically. This ensures that for each step in the 'long' direction, either one or no step will be taken in the 'short' direction.

Consider this traversal using standard step-wise linear interpolation. The slope value m = DeltaShort/DeltaLong. From the initial point, the 'long' coordinate is incremented, and m is added to the 'short' coordinate. The integer portion of the 'short' coordinate is used for the pixel coordinate. Thus, as the line is traversed, the value of the 'short' coordinate looks like: startval + DeltaShort/DeltaLong + DeltaShort/DeltaLong + ..., which is equivalent to: startval + (DeltaShort + DeltaShort + ...)/DeltaLong. Note also that the integer value of the 'short' coordinate only changes when the numerator of the accumulated slope passes a multiple of the denominator, or, when a*DeltaShort passes a multiple of DeltaLong. So, if we keep track of the numerator in a separate integer variable, we can check when it surpasses the denominator and add 1 to the 'short' coordinate, then subtract 1 from the fraction (equivalent to subtracting DeltaLong/DeltaLong) to perform a carry-like operation.

Sample source code:

dx = x2 - x1
dy = y2 - y1
adx = abs(dx)  ;We're comparing lengths, so we must take the
ady = abs(dy)  ;  absolute value to ensure correct decisions.
sx = sgn(dx)   ;The sign of the delta values tell us if we need to
sy = sgn(dy)   ;  add or subtract 1 to traverse the axes of the line.
x = x1
y = y1
plotpoint(x,y)  ;Plot the first point
                ;(Assumes the segment is not a single point; sgn()
                ; would return 0 and the loop would never terminate.)
if adx>ady
[ err = ady/2   ;Initialize numerator (see scan conversion)
  do
  [ x = x + sx        ;Move along the 'long' axis
    err = err + ady   ;Update the numerator
    if err>adx        ;Check to see if num>denom
    [ err = err - adx ;If so, subtract 1 from the fraction...
      y = y + sy      ;...and add 1 to the 'short' axis
    ]
    plotpoint(x,y)
  ] until x==x2
] else
[ err = adx/2  ;Initialize numerator (see scan conversion)
  do
  [ y = y + sy        ;Move along the 'long' axis
    err = err + adx   ;Update the numerator
    if err>ady        ;Check to see if num>denom
    [ err = err - ady ;If so, subtract 1 from the fraction...
      x = x + sx      ;...and add 1 to the 'short' axis
    ]
    plotpoint(x,y)
  ] until y==y2
]


Bresenham's Circle Algorithm


BSP Tree Algorithm


Chrome Mapping Algorithm


Concave Polygon Splitting Algorithm

Many graphical routines that work on polygons require them to be convex, so an algorithm is needed to split concave polygons into representative convex components. This process needs only to be performed once during the creation of the model, and the computation time taken to ensure an optimized set of polygons will surely pay off in rendering speed.

In viewing a polygon from a given side, with its vertices in order, the polygon will tend to flow either clockwise or counterclockwise as the edges are traversed. For a computer to detect this, we take the sum of all of the cross products of each pair of adjacent edges. This vector sum points in the direction of the surface normal of the 'convex surface.' (For polygons whose vertices are 2-D, only the z-component of the resultant vector is significant, and it may be stored as a single number instead of a vector.) If a polygon has a concave portion, two edges in this concavity will have a cross product in the opposite direction of the convex surface, giving the surface normal of the 'concave surface.'

The cross products of all edge pairs are then analyzed. A 'convex' vertex is selected (now known as the 'starting vertex'), and the edge list is traversed in one direction until a 'concave' vertex is reached. A temporary edge is constructed between the 'concave' vertex and the starting vertex. If the resultant polygon containing the traversed edges and the new edge is concave (using cross product testing), or if the new edge intersects an existing edge anywhere in the polygon, the starting vertex is dropped from the traversal list, and the polygon is recursively tested for convexity. If the resultant polygon is convex, the next vertex outside the traversal list next to the starting vertex is added to the list and becomes the new starting vertex, and the polygon is recursively tested for convexity. The newly formed polygon is now in its largest convex form containing the original starting vertex. This polygon is output from the algorithm, and the remaining vertices are recursively tested in this manner.
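
As a concrete starting point, here's the cross product convexity test in C for the 2-D case (just the test itself, not the full splitter):

typedef struct { float x, y; } vec2;

/* Sketch of the 2-D cross product test described above: take the
   z-component of the cross product at every pair of adjacent edges;
   if the signs ever disagree, the polygon has a concavity.
   Assumes no repeated vertices. */
int is_convex(const vec2 *v, int n)
{
    int i, pos = 0, neg = 0;
    for (i = 0; i < n; i++) {
        vec2 a = v[i];
        vec2 b = v[(i + 1) % n];
        vec2 c = v[(i + 2) % n];
        float z = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (z > 0) pos++;
        if (z < 0) neg++;
    }
    return !(pos && neg);   /* convex if every turn goes the same way */
}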

This algorithm should produce a good set of convex polygons, but it does not guarantee the mathematically minimal polygon count for all cases. In some freakish cases, and depending on where the recursion is started from, this algorithm completely triangulates a concave polygon that could be specified with only a few convex polygons. To nearly guarantee optimal results, an implementation of this algorithm could perform the recursion separately starting in each different span of convex vertices and output the best result, but for most simple concave polygons, this algorithm will suffice.

(discovered all of this by myself! then discovered that others had too... :) )


Environment Mapping Algorithm

Environment mapping is an extension of Phong shading that simulates a reflective surface. Instead of using the Phong-interpolated normal vector to calculate lighting, this vector is used to index into a prerendered bitmap that describes what the environment looks like from the viewpoint of the object. This mapping can be performed in two ways: spherical or cube mapping.

Spherical mapping converts the normal vector into spherical coordinates theta and phi, which are referenced into a circular, 360 degree fisheye render of the environment. Many ray tracers are capable of outputting a suitable spherical render from a given position in a scene.

Cube mapping separates the environment space of the vector into six sections, representing faces of a cube. Six renders of the scene, in the six orthogonal directions from the object's position are prerendered and mapped onto this "cube." The ratios of the components of the normal vector are examined to determine which cube face the normal is facing. The map which is being faced is then intersected with the ray to determine the final pixel value.

Sample Source Code:

;Environment mapping using cube mapping
;'map' is passed as an array of six 2-d bitmaps
func env_cube(normal:vector, map(,,):pixel):pixel
[ ;We're comparing component magnitudes, so we must take the absolute
  ;value of the components before comparing
  mag:vector = abs(normal)

  ;Whichever component is greatest tells us which cube face the normal
  ;is pointing towards.(0 is x, 1 is y, 2 is z, add 3 for negative directions.)
  ;Then the other 2 components are scaled to get the bitmap coordinates.
  if mag.x > mag.y
    if mag.x > mag.z
    [ if normal.x > 0
        face = 0
      else
        face = 3
      return map(face, normal.y/normal.x, normal.z/normal.x)
    ]
    else
    [ if normal.z > 0
        face = 2
      else
        face = 5
      return map(face, normal.x/normal.z, normal.y/normal.z)
    ]
  else
    if mag.y > mag.z
    [ if normal.y > 0
        face = 1
      else
        face = 4
      return map(face, normal.x/normal.y, normal.z/normal.y)
    ]
    else
    [ if normal.z > 0
        face = 2
      else
        face = 5
      return map(face, normal.x/normal.z, normal.y/normal.z)
    ]
]


Fractal Terrain Algorithm

In many instances, it is much more convenient to generate an arbitrary landscape procedurally than to store a huge array which explicitly describes the terrain shape. This algorithm can also generate the fine detail of a heightfield, given a sparse set of control points which defines its general shape.

Sample source code:

;Use 1-byte int parameters so that (256,256) will
;  wrap around to (0,0), causing the landscape to
;  be continuous across its edges.
array(0 to 255:int1, 0 to 255:int1)

generate(initval):sub
[ array = -1            ;Initialize the entire array to -1
  array(0,0) = initval  ;Seed a single point in the array
  sample(0,0,256)
]

sample(x1,y1,size):sub
[ halfsize = size/2
  xh = x1 + halfsize
  yh = y1 + halfsize
  x2 = x1 + size
  y2 = y1 + size
  if array(xh,y1)==-1, array(xh,y1) = (array(x1,y1) + array(x2,y1))/2 + random*size
  if array(xh,y2)==-1, array(xh,y2) = (array(x1,y2) + array(x2,y2))/2 + random*size
  if array(x1,yh)==-1, array(x1,yh) = (array(x1,y1) + array(x1,y2))/2 + random*size
  if array(x2,yh)==-1, array(x2,yh) = (array(x2,y1) + array(x2,y2))/2 + random*size
  if array(xh,yh)==-1, array(xh,yh) = (array(xh,y1) + array(xh,y2) + ..
                                       array(x1,yh) + array(x2,yh))/4 + random*size
  if halfsize>1
  [ sample(x1,y1,halfsize)
    sample(xh,y1,halfsize)
    sample(x1,yh,halfsize)
    sample(xh,yh,halfsize)
  ]
]


Gouraud Shading Algorithm

This is the next level of complexity and image improvement above Lambert shading. Instead of the lighting parameter being calculated once per polygon, it is calculated per vertex. To achieve this, the surface normal of each vertex must be computed, which can be found by averaging the surface normals of all polygons that share this vertex. (These can be pre-calculated if the model does not change form.) During scan conversion, the color values for each vertex, computed with the lighting parameter, are interpolated across the polygon.

The Gouraud shading method effectively smooths out the abrupt color changes on polygon edges found in flat- and Lambert-shaded polygons. However, since the lighting is calculated so sparsely, sharp or distinct highlights usually fall between the vertices and, thus, are not seen, resulting in a rather dull-looking (albeit smooth) surface color.

For a sharper-looking shading model, see Phong shading.


Lambert Shading Algorithm

This is sometimes mistaken for the flat shading algorithm, which assigns a single, constant color (given as part of the model data) to an entire polygon. Lambert shading varies the intensity of the given color using the lighting parameter, which is the dot product of the light direction and the surface normal of the polygon. This new color is shaded over the entire polygon, giving a flat look, but requiring no per-line or per-pixel interpolation during scan-conversion.

The advantage of this method over flat shading is that the entire model can be assigned a single color, and the polygons will all have different shading, depending on their orientation to the light source. If the light source is relative to the camera, or moving independently, then the shading will change as the light source moves, giving a much better three-dimensional presence to the model.
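
The lighting parameter itself is just a clamped dot product; a small C sketch (vec3 as in the earlier examples, both vectors assumed normalized):

typedef struct { float x, y, z; } vec3;

/* Lambert lighting parameter: the cosine of the angle between the
   light direction and the polygon's surface normal, clamped to zero
   so polygons facing away from the light go black, not negative. */
float lambert(vec3 normal, vec3 light_dir)
{
    float d = normal.x * light_dir.x
            + normal.y * light_dir.y
            + normal.z * light_dir.z;
    return d > 0.0f ? d : 0.0f;
}

The polygon's final color is then just its given color scaled by this value.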

For a smoother shading model, see Gouraud shading.


Marching Cubes Algorithm

This algorithm generates a contour plot of an equation by discretely sampling the equation in a grid-like fashion.


Mip-Mapping Algorithm

When a bitmap is scaled down to a smaller size using only interpolation, the subset of pixels chosen for display may not correctly portray the original image, and different sets of pixels may be extracted for different frames, causing flickering. Mip-mapping causes the texture mapper to read from different, smaller texture maps depending on the final on-screen texture size. Thus, textures are never shrunk at render time, only expanded, usually with smoothing procedures like bilinear interpolation, eliminating aliasing and flicker at all stages of expansion. Creating the shrunken textures happens once during initialization, so carefully tuned, high-end, anti-aliasing shrinking functions can be used without degrading rendering performance. The easiest shrinking function is to average each 2x2 pixel block into a single pixel of the shrunken texture.

Trilinear interpolation is an extension of mip-mapping and bilinear interpolation that smooths the transitions between the different levels of the mip-maps. Given that one map is stored for when the distance from the camera to the polygon is A, that a smaller map is used for the next distance B, and that the actual distance from the camera to the point in question is C, where A<C<B, trilinear interpolation takes the bilinearly-interpolated sample point from both maps and linearly interpolates these two pixel values using the distance as the interpolator. This gives an incredibly smooth transition between small and large texture sizes.
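
A hedged C sketch of the trilinear blend, assuming a hypothetical bilinear() sampler that bilinearly filters a given mip level:

/* bilinear() is a hypothetical sampler, not defined here. */
extern float bilinear(float u, float v, int miplevel);

/* Trilinear interpolation: blend two bilinear samples taken from
   adjacent mip levels.  'blend' runs from 0 at distance A to 1 at
   distance B, e.g. blend = (C - A) / (B - A). */
float trilinear(float u, float v, int miplevel, float blend)
{
    float near_sample = bilinear(u, v, miplevel);      /* larger map  */
    float far_sample  = bilinear(u, v, miplevel + 1);  /* smaller map */
    return near_sample + blend * (far_sample - near_sample);
}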


Painter's Algorithm

To achieve a believable result when displaying filled 3-D polygons on a video screen, polygons closer to the camera must obscure polygons that are farther away. When a renderer is said to use the painter's algorithm, it means that polygons are rendered to the screen in a pre-sorted far-to-near order, so that distant polygons are "painted over" by closer polygons. Sorting the polygons into a proper rendering order usually depends heavily on how the scene is constructed. For example, if polygons are sorted by their midpoints, and a large wall is to the left of a camera, with a pillar beside the wall to the right, the midpoint of the wall could be closer to the camera than the midpoint of the pillar. Obviously, in this case, the pillar needs to occlude the wall, but if the midpoints are sorted to determine rendering order, the pillar will be hidden. A BSP tree is one way to generate world model data that ensures proper rendering order.

See also Z-buffering, S-buffering, Portal rendering.


Phong Shading Algorithm

This is the next level of complexity and image improvement above Gouraud shading. As with the Gouraud method, surface normals at every vertex are computed. However, in Phong shading, these normals are interpolated across the polygon during scan conversion, and a lighting parameter is calculated at every pixel, which affects its color. This interpolation mathematically gives the flat polygon a rounded reflection across its surface. The advantage of this method over Gouraud's is that highlights from light sources are always visible and clearly defined (given that the camera can see the polygon that is highlighted, and that the highlight is larger than a single pixel).

See also environment mapping.


Portal Rendering Algorithm

Like S-buffering, portal rendering can render polygonal scenes with zero overdraw. This is achieved by careful construction of models, which usually represent walls of indoor static scenes. The underlying premise of portal rendering is that if the camera is inside a convex polygon mesh, no overdraw will occur. Therefore, a model is separated into convex polyhedra (known as 'sectors'). To do this, certain polygons must be created to separate a concave model into convex sub-models. These new polygons are known as 'portals.' During rendering, only polygons within the current sector are transformed, backface culled, and rendered to the screen. If one of these polygons happens to be a portal, the polygons in the sector visible through the portal are rendered and clipped to the portal polygon.

Aside from utilizing zero overdraw, this rendering method also gives a speed increase in that only visible sectors are ever transformed and rendered. Unique features can also be added to the portal method relatively easily, like transparent bitmaps on portal polygons, and transforming space coordinates through portals, which would allow wormholes, security cameras, mirrors, etc. Movable objects can also be added on top of the portal engine (most likely causing some overdraw) to make scenes more interactive and easier to model.

Crystal Space is a free portal engine development project.

Pseudo-code:

sub DoTheScene(currentsector:object sector, campos:position, camrot:matrix)
[ ;We take position and rotation matrices and make a transformation matrix
  xform:matrix = math.MakeTransformationMatrix(campos,camrot)

  DrawPortal(currentsector, xform, ScreenClipPoly())
]

sub DrawPortal(currentsector:object sector, transformation:matrix, clip():coord)
[ poly:object polygon

  with currentsector.polylist
  [ poly = TransformPoly(listval, transformation)
    if FacingUs(poly)
    [ ZClip(poly)
      if poly.stillthere
      [ if poly.isportal
        [ DrawPortal(poly.nextsector, ..
                 TransformMatrix(transformation,poly.nextsectormatrix), ..
                 poly.coordlist)
        ]
        else
        [ DrawToScreen(poly)
        ]
      ]
    ]
  ]
]


Ray Casting Algorithm

Ray casting, in its most general form, is a subset of ray tracing where only visibility intersections are calculated. Applications of this algorithm vary greatly, however. One widespread use of ray casting is for voxel-type data. A ray is cast from the camera in a certain direction, and each integer boundary crossed is checked for opaque data, until a cube is hit or the maximum visibility range is reached. Wolfenstein 3D is probably the most well-known implementation of voxel ray casting, where a 2-D map is constructed, and a ray is cast only once per screen column.

Sample Source Code:

type coord:[x:float, y:float]

;Wolfenstein-type ray casting
;vecdir should be normalized
func raycast(vecstart:coord, vecdir:coord, map(,):pixel):pixel
[ ;Initialize our accumulator
  vec:coord = vecstart

  ;Get map dimensions
  [mapx,mapy] = dimension(map)

  ;Keep track of the integer cell the ray is in
  ivec:coord = (vec):[:int,:int]

  ;Just keep on looping
  loop
  [ vec = vec + vecdir
    ivec = (vec):[:int,:int]

    ;Give up if the ray leaves the map
    if (vec.x<0) | (vec.x>=mapx) | (vec.y<0) | (vec.y>=mapy), return NO_DATA

    ;Return the first opaque cell the ray enters.  (EMPTY marks a map
    ; cell with no wall.  A full implementation would also compare the
    ; old and new integer coordinates to see which cell face was
    ; crossed, to pick the proper texture column.)
    if map(ivec.x,ivec.y) != EMPTY, return map(ivec.x,ivec.y)
  ]
]


Ray Tracing Algorithm

While most rendering packages take polygons, circles, lines, etc., and convert their space coordinates into the screen, ray tracing takes the opposite approach. A 3-D scene is modeled, and can contain analytically defined objects such as spheres, cones, and general equations. A camera is defined in the scene, and field of view and screen resolution are defined. The renderer proceeds to construct a mathematical parametric ray from the camera, through an individual pixel in the field of view, and tests this ray for intersections with the scene objects. Because both the objects and the ray are mathematically defined, the ray can be reflected off of or refracted through object surfaces while carrying shading properties, to generate nearly photo-realistic effects.

Anti-aliasing can be achieved by simply tracing more rays through a single pixel at different sub-pixel locations and averaging the results. Focal blur, fog, shadows, and many other effects can be conveniently added, as they can be mathematically defined. Procedural textures may also be applied, because the x,y,z coordinates of a ray intersection are easily attainable and can be parameters to a texture function; for example, a procedural 3-D checkerboard function: if ((x+y+z):int & 1), return black, else return white.
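
As a small taste of the math involved, here's a C sketch of the classic ray/sphere intersection test (the names are mine for illustration, not from any particular package):

#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Intersect a ray (origin o, normalized direction d) with a sphere
   (center c, radius r).  Returns 1 and the nearest hit distance in
   *t, or 0 if the ray misses.  Standard quadratic formula approach:
   substitute p = o + t*d into |p - c|^2 = r^2 and solve for t. */
int ray_sphere(vec3 o, vec3 d, vec3 c, float r, float *t)
{
    vec3 oc;
    float b, cc, disc;
    oc.x = o.x - c.x;  oc.y = o.y - c.y;  oc.z = o.z - c.z;
    b  = oc.x * d.x + oc.y * d.y + oc.z * d.z;          /* dot(oc, d) */
    cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    disc = b * b - cc;            /* discriminant (a = 1, d normalized) */
    if (disc < 0.0f)
        return 0;                 /* no real roots: the ray misses */
    *t = -b - sqrtf(disc);        /* nearest of the two intersections */
    return *t > 0.0f;             /* hit must be in front of the origin */
}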

This approach, while generating stunning images and effects, is a brute force method of rendering, and thus is very computationally expensive. While polygon and other such screen-filling renderers can easily be written to run in real time on an average computer of today, complex or high-resolution ray traced scenes can take up to a few hours, even on a fast PowerPC. Simple, low-resolution scenes may take less than 10 seconds to render on a fast home computer.

For more information, visit the official POV-Ray web site. POV-Ray is a freeware ray tracing package with a large array of very sophisticated effects, with freely available source code. The web site also has links to other general ray tracing information and algorithms.


S-Buffer Algorithm

Video memory is usually considerably slower than other memory (in systems without a unified memory architecture) because the monitor driver circuitry is constantly reading from it to refresh the display. Thus, to obtain optimal graphical performance, read/write accesses to video memory must be minimized. The painter's algorithm and Z-buffering both suffer from overdraw, where a pixel may be written more than once. S-buffering, which is short for "span buffering", uses the scan conversion process to set up an array of scan line spans in which visibility testing occurs, then renders these spans to video memory.

The nature of these spans depends heavily on the type of scan conversion being used. For flat or Lambert shading, a span need only contain the x coordinates and depth values of its start and end points, as well as the color. For more sophisticated rendering methods, the texture coordinates, surface normals, etc. of the span's endpoints must be stored as well.

Once a span has been defined, it is depth-compared with the other spans in its scan line. If the new span is closer and overlaps another span, the farther span is clipped and the new span is inserted. If the new span is completely occluded by previous spans, no insertion takes place. Spans that intersect in both x and depth values must be interpolated to determine their new start and end values. Many types of interactions between spans are possible, and the span insertion routine must keep spans from overlapping while keeping their endpoint data intact.

Once all spans in the scene have been inserted into the S-buffer, rendering to the screen is a very streamlined process. All spans are ordered left-to-right in each scan line, so no new screen coordinates need to be calculated, and output to video RAM can occur at the maximum bit width that the processor can support.

The advantages of this method shine in high-resolution applications, where other methods overdraw huge numbers of pixels, and where Z-buffering must perform testing on a per-pixel basis. With S-buffering, texture and lighting calculations for pixels that would be overdrawn are never calculated. This method also usually consumes much less memory than a Z-buffer, and, because of its procedural nature, can be made to support transparency, antialiasing, and other such high-end techniques.

See also Portal rendering.


Scan Conversion Algorithm

Polygon rendering consists of translating vertex data to horizontal scan lines on a video screen, hence the name scan conversion. The fundamental method for accomplishing this task is interpolation. Usually, the topmost vertex is found, and this vertex, along with its neighboring two vertices, define the current left and right edges of the polygon. Starting at this uppermost pixel, and descending downward, the left and right x-coordinates of the polygon for the current scanline are found by interpolating between the corners' x-coordinates, and this span is rendered. When a vertex is reached, a new left or right edge is defined, and the interpolation is reset. The polygon is completed when both edges reach the same vertex, or when the bottom-most y-coordinate is reached (whichever is easier to implement in the current setting).

Shading and mapping routines, which actually decide what to fill into the individual pixels of the polygon, usually require data on where in the polygon the current pixel lies. Routines like texture mapping and Gouraud shading pass data about the vertices to the scan converter, which must be interpolated across the entire polygon. This happens on a per-line basis first, where the y-coordinate is used to interpolate the vertices' parameters down the polygon edges, and then on a per-pixel basis, where the x-coordinate is used to interpolate the edges' parameters across the scan line.

Consider the right edge of a square polygon travelling from (0,0) to (10,1), scanning downwards and using a step-wise linear interpolation algorithm. m, in this case, is (10-0)/(1-0), or 10. Therefore, the first scan line will only contain a point at (0,0), and the second line will contain a segment from (0,1) to (10,1). The square, which is slightly rotated clockwise from an orthogonal orientation, has a single point at its apex, and a long flat step on the scan line below.


Scanline Rendering

Traditional polygon rendering translates polygon coordinates to screen coordinates, then fills those screen coordinates with color data. Scanline rendering does this in reverse order: the scanlines are looped through, and each one is intersected with the visible set of polygons. From these intersections, the start and end points of every polygon that affects a given scanline are known.


Sutherland-Hodgman Clipping Algorithm

This pre-scan conversion algorithm clips a polygon against a single clip parameter and outputs a new polygon. It is usually called four times to clip against the entire view area, or six times for the view area plus near/far clipping. While shown in 2-D here, it extends to any dimension by merely changing the definition of the clipping surface.

Once the clipping surface has been defined (a simple compare function for a rectangular viewing area), each line segment in the polygon edge list is compared, in order, to the surface. Segments that are fully contained within the "inside" region are added to the new polygon's edge list, while segments that are fully "outside" are skipped. Segments that straddle the clipping boundary require the clipper to calculate the intersection of the segment and the boundary, and to output that intersection point along with the segment's "inside" endpoint.
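
Here's a C sketch of a single clipping pass, using a vertical boundary at x = xmin as the clip parameter (illustrative; not the only way to structure it):

typedef struct { float x, y; } vec2;

/* One Sutherland-Hodgman pass against the boundary x >= xmin.
   'in' has n vertices; the clipped polygon goes to 'out'; returns the
   new vertex count.  'out' should have room for up to 2*n vertices. */
int clip_left(const vec2 *in, int n, vec2 *out, float xmin)
{
    int i, count = 0;
    for (i = 0; i < n; i++) {
        vec2 a = in[i];
        vec2 b = in[(i + 1) % n];       /* polygon edge a -> b */
        int a_in = a.x >= xmin;
        int b_in = b.x >= xmin;
        if (a_in != b_in) {             /* edge straddles the boundary */
            float t = (xmin - a.x) / (b.x - a.x);
            out[count].x = xmin;
            out[count].y = a.y + t * (b.y - a.y);
            count++;
        }
        if (b_in)                       /* keep endpoints that are inside */
            out[count++] = b;
    }
    return count;
}

Calling this sort of routine once per boundary (left, right, top, bottom) clips against the whole view rectangle.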


Texture Mapping Algorithm

Applying a bitmap image to a polygon involves passing 2-dimensional texture coordinates to the scan converter. These coordinates are interpolated across the entire polygon, and at each pixel the texture map is read by the interpolated coordinates to retrieve that pixel's color.

Sample source code:

;Define what a 'coord' is
type coord:[x:float, y:float]

;Pass polygon and texture 2-d coordinates, and the texture map
sub texmap(poly():coord, tex():coord, bitmap(,):pixel)
[
  ;:Declare our object vars.  We will only create them   :
   :once, then reset their values to reduce the overhead :
   :of always creating/destroying objects.               :;
  [leftedge, rightedge]:static object scanconvert
  [lefttex, righttex]:static object interpolatecoord

  ;Define our line drawing subroutine
  sub doline(scanline:int, x1:int, x2:int, texleft:coord, texright:coord):inline
  [ linetex:static object interpolatecoord
    linetex.reset(texleft,texright,x2-x1)
    for x:int,x1,x2,1
    [ putpixel(x,scanline,bitmap(linetex.current))
      linetex.step
    ]
  ]
    
  ;FindTopVertex will return the array index of the top-most vertex
  top = FindTopVertex(poly())
  left = wrap(top-1, dimension(poly()))
  right = wrap(top+1, dimension(poly()))
  startline = poly(top).y
  endline = poly(FindBotVertex(poly())).y

  ;Set up our object vars
  leftedge.reset (poly(top),poly(left))
  rightedge.reset(poly(top),poly(right))
  lefttex.reset (tex(top),tex(left), poly(left).y  - poly(top).y)
  righttex.reset(tex(top),tex(right),poly(right).y - poly(top).y)

  for y:int,startline,endline,1
  [ ;Draw a texture mapped scanline of the poly
    doline(y, leftedge.x, rightedge.x, lefttex.current, righttex.current)

    ;Increment our interpolators
    [leftedge, rightedge, lefttex, righttex].step

    ;See if we've finished the left edge
    while y==poly(left).y
    [ oldleft = left
      left = wrap(left-1, dimension(poly()))
      leftedge.reset(poly(oldleft),poly(left))
      lefttex.reset(tex(oldleft),tex(left),poly(left).y-poly(oldleft).y)
    ]

    ;See if we've finished the right edge
    while y==poly(right).y
    [ oldright = right
      right = wrap(right+1, dimension(poly()))
      rightedge.reset(poly(oldright),poly(right))
      righttex.reset(tex(oldright),tex(right),poly(right).y-poly(oldright).y)
    ]
  ]
]

;Step-wise linear interpolator
object interpolatecoord
[ default [vartype = public]

  ;Declare object variables
  [current, delta]:coord

  sub reset(start:coord, end:coord, numsteps:int)
  [ current = start
    delta = (end-start)/numsteps
  ]

  func step:coord
  [ return (current = current + delta)
  ]
]

See also bilinear interpolation, mip-mapping, trilinear interpolation.


Wu Anti-aliasing Algorithm


Z-Buffering Algorithm

This algorithm can render polygons in any order, while still retaining proper visibility of overlapping polygons, even when the polygons penetrate each other. This is accomplished by doing a visibility check at each pixel: If a point in a polygon is closer than the previously plotted point at that pixel, the new pixel is plotted; otherwise, the previous pixel remains. Naturally, the distance data needs to be stored in an array (called the z-buffer) of the same size as the view screen. This array is initialized to the farthest allowable distance. The distances from the camera to the vertices are passed to the scan converter, which are interpolated across the entire polygon, checking the interpolated value against the value in the z-buffer. If the pixel is to be plotted, the z-buffer entry must also be updated with the new distance value.

One optimization that can save both memory and time is not to do the compare at every pixel, but instead at every 4th pixel, or once in every 4x4 block of pixels, or whatever granularity is visually acceptable. This dramatically reduces the size of the z-buffer and usually only produces visible artefacts in the render where polygons interpenetrate.

Sample Source Code:

;Z-buffered, flat shaded polygon routine, using a floating-point z-buffer
sub zbuf(poly():coord, z():float, zbuf(,):float, color:pixel)
[
  ;:Declare our object vars.  We will only create them   :
   :once, then reset their values to reduce the overhead :
   :of always creating/destroying objects.               :;
  [leftedge, rightedge]:static object scanconvert
  [leftz, rightz]:static object interpolate

  ;Define the line-drawing subroutine
  sub doline(scanline:int, x1:int, x2:int, zleft:float, zright:float):inline
  [ linez:static object interpolate
    linez.reset(zleft,zright,x2-x1)
    for x:int,x1,x2,1
    [ ;Z-buffer compare
      if linez.current<zbuf(x,scanline)
      [ putpixel(x,scanline,color)
        zbuf(x,scanline) = linez.current
      ]
      linez.step
    ]
  ]
    
  ;FindTopVertex will return the array index of the top-most vertex
  top = FindTopVertex(poly())
  left = wrap(top-1, dimension(poly()))
  right = wrap(top+1, dimension(poly()))
  startline = poly(top).y
  endline = poly(FindBotVertex(poly())).y

  ;Set up our object vars
  leftedge.reset (poly(top),poly(left))
  rightedge.reset(poly(top),poly(right))
  leftz.reset (z(top),z(left), poly(left).y  - poly(top).y)
  rightz.reset(z(top),z(right),poly(right).y - poly(top).y)

  for y:int,startline,endline,1
  [ ;Draw a Z-buffered scanline of the poly
    doline(y, leftedge.x, rightedge.x, leftz.current, rightz.current)

    ;Increment our interpolators
    [leftedge, rightedge, leftz, rightz].step

    ;See if we've finished the left edge
    while y==poly(left).y
    [ oldleft = left
      left = wrap(left-1, dimension(poly()))
      leftedge.reset(poly(oldleft),poly(left))
      leftz.reset(z(oldleft),z(left),poly(left).y-poly(oldleft).y)
    ]

    ;See if we've finished the right edge
    while y==poly(right).y
    [ oldright = right
      right = wrap(right+1, dimension(poly()))
      rightedge.reset(poly(oldright),poly(right))
      rightz.reset(z(oldright),z(right),poly(right).y-poly(oldright).y)
    ]
  ]
]

object interpolate
[ default [vartype = public]

  [current, delta]:float

  sub reset(z1:float, z2:float, numsteps:int)
  [ current = z1
    delta = (z2-z1)/numsteps
  ]

  func step:float
  [ return (current = current + delta)
  ]
]

See also S-buffer.


Sorting Algorithms


Jeffsort Algorithm

This is a simple, fast, brute-force, linear-time sorting algorithm (essentially a counting sort) that operates on sets of integer-type data. A jeffcount array (effectively a histogram), whose subscript range is the range of the input data, is constructed and initialized to zero. For each element in the source stream, its jeffcount number is incremented. After the source stream has been counted, the jeffcount array is traversed linearly, and each subscript value is output as many times as its respective jeffcount number.

Sample source code:

In BASIC:

'Jeffsort an array of numbers, given array size
'  and the range of array contents
SUB jeffsort(array(), arraysize, range)

'Allocate and initialize the jeffcount array
DIM jeffcount(0 TO range)
FOR x = 0 TO range: jeffcount(x) = 0: NEXT

'Jeffcount the input array
FOR x = 0 TO arraysize
  jeffcount(array(x)) = jeffcount(array(x)) + 1
NEXT

'Put the sorted numbers back into array()
counter = 0
FOR x = 0 TO range
  WHILE jeffcount(x) > 0
    array(counter) = x
    counter = counter + 1
    jeffcount(x) = jeffcount(x) - 1
  WEND
NEXT
END SUB

In Gigo:

;Jeffsort an array of numbers, given array size
; and the range of array contents
jeffsort(array():int, arraysize:int, range:int):sub
[ ;Allocate and initialize the jeffcount array
  jeffcount(0 to range:int):int = 0

  ;Jeffcount the input array
  with array, inc jeffcount(listval)

  ;Put the sorted numbers back into array()
  counter:int = 0
  with jeffcount, while (pdec listval)>0, array(pinc counter) = listnum
]
;isn't this a nice language? ;)

Credits for this one go to Jeff Franklin.


Quick Sort Algorithm


Compression Algorithms


Fractal Compression


Huffman Encoding


JPEG Compression


LZ77 Compression

Many people who come up with their own compression discoveries usually rediscover this one. It gives very good compression on most types of uncompressed data. As a file is read in, the previous data (called a "window", typically 32k or so) is buffered, and checked to see if the current data has occurred earlier in the file. If so, only a pointer to the previous data and the length to be repeated is stored, instead of the redundant data.

Here's a sample LZ77 Compression process. The example data here does not compress. It does, however, show how LZ77 works. You can imagine how nicely this will work on large files:

Input:   1 2 3 4 5 2 3 4 3 2 1 2 4 3 2 1 2
         1)        2)    3)  4)  5)

Process: 1) 5 Original bytes 1 2 3 4 5
         2) 3 Repeated bytes from 4 bytes before
         3) 2 Original bytes 3 2
         4) 2 Repeated bytes from 10 bytes before
         5) 5 Repeated bytes from 5 bytes before

Output:  0005 01 02 03 04 05 FFFC 0003 0002 03 02 FFF6 0002 FFFB 0005

Using 16-bit signed pointers like this, a positive "pointer" indicates a run of original data, and a negative pointer points back to the data to be copied, followed by a 16-bit length. The farthest back a pointer can reach is -32768 bytes, hence the 32k window size, and the 16-bit length allows any length of data to be replicated from the window.
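
A C sketch of a decoder for this toy format (assuming big-endian 16-bit codes, as written in the example above):

/* Decode the toy LZ77 format above.  'in' holds the encoded stream,
   'insize' its length; decoded bytes go to 'out' (assumed big enough).
   Returns the decoded length. */
long lz77_decode(const unsigned char *in, long insize, unsigned char *out)
{
    long ip = 0, op = 0;
    while (ip < insize) {
        /* read a signed, big-endian 16-bit code */
        long code = (in[ip] << 8) | in[ip + 1];
        if (code >= 0x8000) code -= 0x10000;    /* sign-extend */
        ip += 2;
        if (code > 0) {                  /* run of original bytes */
            while (code--)
                out[op++] = in[ip++];
        } else {                         /* back-pointer, then a length */
            long src = op + code;        /* code is negative */
            long len = (in[ip] << 8) | in[ip + 1];
            ip += 2;
            while (len--)
                out[op++] = out[src++];  /* copy byte-by-byte so that
                                            overlapping runs repeat */
        }
    }
    return op;
}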


LZW Compression


MPEG Audio Compression


MPEG Video Compression


RLE Compression

Run-Length Encoding is by far the easiest compression method to implement, and gets average compression, but only with the right types of data. The compression ratio relies on the amount of repeated data atoms in the data. Usually, compressed data is stored as a counter, followed by a data atom (pixel, byte, word or whatever). This means that, in the source stream, the data atom was repeated as many times as counter dictates.

One obvious condition to note: when consecutive data atoms all have different values, RLE actually expands the data, because each atom would require a counter value of "1" followed by the atom itself, effectively doubling the size of the data. To prevent this, most implementations of RLE reserve a flag bit in the counter that specifies that the next counter atoms are raw, copied data. Thus, a random sequence of atoms only adds 1 atom to the encoded data size, instead of doubling it.

Here's a sample Run-Length Encoding Process (in hexadecimal):

Input:   01 03 03 03 03 03 03 03 03 03 08 07 06 05
         1) 2)                         3)
    
Process: 1) 1 raw byte  of 01
         2) 9 run bytes of 03
         3) 4 raw bytes of 08 07 06 05

Output:  81 01 09 03 84 08 07 06 05

As you can see, the high bit of the code byte ($80 in hexadecimal) is added to the length of the following data if it's raw. If the high bit isn't set, the value of the code represents how many times to repeat the data atom that follows.
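
And a C sketch of a decoder for this RLE format:

/* Decode the RLE format above: if the high bit of the code byte is
   set, the low 7 bits count raw bytes that follow; otherwise the code
   is a repeat count for the single byte after it.  Returns the
   decoded length; 'out' is assumed big enough. */
long rle_decode(const unsigned char *in, long insize, unsigned char *out)
{
    long ip = 0, op = 0;
    while (ip < insize) {
        unsigned char code = in[ip++];
        if (code & 0x80) {               /* raw run */
            int count = code & 0x7F;
            while (count--)
                out[op++] = in[ip++];
        } else {                         /* repeated atom */
            int count = code;
            unsigned char value = in[ip++];
            while (count--)
                out[op++] = value;
        }
    }
    return op;
}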


All pages were generated with a text editor.
All images (except counter art and host ads) are generated, owned and (c) by me.
