Rendering
Rendering is the final step in the 3D image creation chain: a rendering engine reads the scene description (containing models, textures, camera and light positions) and, using a certain rendering algorithm, produces the final image. Rendering can be a fast or a very slow process, depending on the hardware used, the efficiency and quality of the algorithm, the complexity of the scene, the number of lights, the resolution of the final image and other parameters. Usually, the higher the quality, the longer the rendering time.
Here's a brief description of the most common rendering techniques:
Wireframe
Hidden lines
Flat shading
Gouraud shading
Phong shading
Z-buffer
Scanline
Raytracing
Radiosity
Wireframe
This is the simplest rendering algorithm and is usually the basic working preview of nearly all graphical modelers. It simply draws straight lines connecting the vertices of the polygonal mesh. Some modelers draw the lines with a "depth cue", where farther lines are drawn darker than nearer ones. Amapi has this feature.
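The depth cue idea is easy to sketch in a few lines of Python. Everything below is hypothetical: a pinhole camera at the origin looking down -Z, one cube edge, and a simple linear brightness mapping.

```python
# A minimal sketch of wireframe projection with depth cueing, assuming a
# camera at the origin looking down -Z. Vertex data and the brightness
# mapping are hypothetical, not taken from any particular modeler.

def project(v, focal=1.0):
    """Perspective-project a 3D point (x, y, z) with z < 0 onto the image plane."""
    x, y, z = v
    return (focal * x / -z, focal * y / -z)

def depth_shade(z, near=-1.0, far=-10.0):
    """Map depth to a brightness in [0, 1]: nearer edges are drawn brighter."""
    t = (z - far) / (near - far)      # 1 at the near plane, 0 at the far plane
    return max(0.0, min(1.0, t))

# One edge of a mesh: two vertices in camera space.
vertices = [(-1.0, -1.0, -4.0), (1.0, -1.0, -6.0)]

p0 = project(vertices[0])
p1 = project(vertices[1])
brightness = depth_shade((vertices[0][2] + vertices[1][2]) / 2)
print(p0, p1, round(brightness, 2))
```

A real modeler would then draw the segment from `p0` to `p1` with that brightness; the darker the line, the farther away that part of the mesh is.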
Hidden lines
Wireframe has the problem that it is often difficult to understand the shape of the object. The hidden lines algorithm draws only the polygons that are visible from the point of view, increasing the solid perception of the object. It is slightly slower than wireframe.
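One building block of hidden-line removal is the back-face test: a polygon whose normal points away from the viewer cannot be visible (a full hidden-line renderer also has to handle polygons hidden behind others). A minimal sketch, assuming counter-clockwise vertex winding and a viewer looking along -Z:

```python
# A minimal back-face test, assuming counter-clockwise vertex winding and a
# viewer looking down the -Z axis. The triangle below is hypothetical.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def is_front_facing(v0, v1, v2):
    """True when the polygon's normal points toward the viewer."""
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    n = cross(e1, e2)
    # The viewer looks along -Z, so the polygon faces us when n_z > 0.
    return n[2] > 0

# A CCW triangle facing the camera, and the same triangle with reversed winding.
tri = ((0, 0, -5), (1, 0, -5), (0, 1, -5))
print(is_front_facing(*tri))                     # faces the camera
print(is_front_facing(tri[0], tri[2], tri[1]))   # faces away, so it is hidden
```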
Flat shading
This is perhaps the first shading algorithm developed. It is very simple and consists of shading each polygon with a single tint, depending on its angle to the light source. It can be done with the "depth sorting" technique, where all polygons are drawn from farthest to nearest. This way the nearer polygons overlap the farther ones, and backface removal is achieved. Although it can seem inefficient to draw polygons that are not visible from the point of view because they are covered by nearer ones, it is still a fast technique, and it is used in several 3D games, Tekken for example. The problem with depth sorting is that it is not always simple to examine two polygons and decide which one is nearer. This problem leads to rendering errors that show up as flipping polygons. Tekken is a good example: such flipping polygons are usually seen where different parts of the bodies interpenetrate.
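The two ingredients, one tint per polygon and farthest-first drawing, can be sketched like this. The polygon list, normals and light direction are hypothetical; depth is reduced to one average z value per polygon, which is exactly the simplification that causes the flipping-polygon errors.

```python
# A sketch of flat shading with depth sorting (the painter's algorithm):
# each polygon gets one intensity from the angle between its normal and the
# light, and polygons are drawn from farthest to nearest. Data is hypothetical.
import math

def lambert(normal, light_dir):
    """Flat-shade intensity: cosine of the angle between normal and light."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    norm = (math.sqrt(sum(n * n for n in normal)) *
            math.sqrt(sum(l * l for l in light_dir)))
    return max(0.0, dot / norm)

# Each polygon: (average depth z, unit normal). More negative z = farther away.
polygons = [(-2.0, (0, 0, 1)), (-8.0, (0, 1, 0)), (-5.0, (1, 0, 0))]
light = (0, 0, 1)

# Sort farthest first, so nearer polygons overdraw farther ones.
draw_order = sorted(polygons, key=lambda p: p[0])
for z, n in draw_order:
    print(z, round(lambert(n, light), 2))
```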
Gouraud shading
Gouraud shading introduces interpolation to reduce the visibility of polygons by smoothing the surface. It is probably the fastest smooth shading algorithm, but its quality is not good enough for final rendering.
Phong shading
Phong is a step beyond Gouraud and can handle texture mapping properly. It is widely used since it is fast and provides good quality, although it cannot handle true reflections and refractions.
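The lighting formula usually paired with Phong shading combines an ambient term, a diffuse term and a specular highlight, evaluated per pixel. A minimal sketch with hypothetical material coefficients (all vectors assumed unit length):

```python
# A sketch of the Phong reflection model for one surface point:
# ambient + diffuse + specular highlight. Material coefficients are
# hypothetical, and the result is not clamped to [0, 1] for brevity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.5, shininess=32):
    diffuse = max(0.0, dot(normal, light_dir))
    # Reflect the light direction about the normal: R = 2(N.L)N - L
    r = tuple(2 * diffuse * n - l for n, l in zip(normal, light_dir))
    specular = max(0.0, dot(r, view_dir)) ** shininess
    return ka + kd * diffuse + ks * specular

# Light and viewer both straight along the normal: full highlight.
n = (0.0, 0.0, 1.0)
print(round(phong(n, n, n), 2))
```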
Z-buffer
Z-buffer is one of the techniques used to achieve Phong shading. It uses a buffer to store the Z coordinate of every point to be rendered on each polygon.
This is the engine of QuickDraw3D, LightWave 3D, Extreme3D.
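The buffer idea itself fits in a few lines: one depth value per pixel, and a fragment is written only when it is nearer than what is already stored. The resolution and fragment data below are hypothetical.

```python
# A sketch of the Z-buffer test: keep one depth per pixel and only write a
# fragment when it is nearer than the stored depth. Data is hypothetical.

WIDTH, HEIGHT = 4, 1
zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
framebuffer = [["bg"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, depth, color):
    """Depth test: a smaller depth value means nearer to the camera."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

plot(1, 0, 5.0, "far")     # a fragment from one polygon
plot(1, 0, 2.0, "near")    # a nearer fragment from another polygon wins
plot(1, 0, 9.0, "hidden")  # a farther fragment is rejected
print(framebuffer[0])
```

Note that, unlike depth sorting, the test is per pixel, so interpenetrating polygons are resolved correctly and no flipping occurs.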
Scanline
This is another way to implement Phong shading but, unlike Z-buffer, scanline renders one row of pixels at a time. This requires more memory, because all the polygons that a given scan line crosses must be kept available while that line is processed.
It is used by software like RenderMan, Electric Image, 3D Studio MAX.
It's perhaps the best compromise between rendering quality and rendering speed.
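The row-by-row idea can be sketched for the simplest case, a flat-bottom triangle: the renderer walks down one scanline at a time, interpolating the left and right edge x coordinates to get the span of pixels to fill. The coordinates below are hypothetical.

```python
# A sketch of scanline rasterization for a flat-bottom triangle: walk down
# one row at a time, stepping the left and right edges by a constant slope.
# Vertex coordinates are hypothetical integer pixel positions.

def scanlines(apex, left, right):
    """Yield (y, x_start, x_end) spans; `left` and `right` share the bottom y."""
    ax, ay = apex
    rows = left[1] - ay
    dxl = (left[0] - ax) / rows     # x step of the left edge per row
    dxr = (right[0] - ax) / rows    # x step of the right edge per row
    xl = xr = float(ax)
    for y in range(ay, left[1] + 1):
        yield y, round(xl), round(xr)
        xl += dxl
        xr += dxr

for span in scanlines((4, 0), (0, 4), (8, 4)):
    print(span)
```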
Raytracing
This is one of the most famous rendering algorithms and it allows the creation of images of stunning quality. In its basic implementation, it calculates the paths that light rays follow as they bounce between the objects in a scene. Since most of the light rays never reach the camera, it is useless to keep track of all of them, so rays are traced backwards from the camera to the light sources. Since the calculations are so physically based, raytracing can produce real shadows and reflective and refractive surfaces. Note that raytracing does not treat white light as the sum of a band of colors, so a raytraced refractive prism does not generate a rainbow.
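The elementary step of backward tracing is intersecting a camera ray with scene geometry. A minimal sketch for a ray against a sphere (the scene, one unit sphere in front of the camera, is hypothetical; a full tracer would recurse on shadow, reflection and refraction rays from the hit point):

```python
# A sketch of the core step of backward raytracing: intersect a ray from the
# camera with a sphere. The scene (one unit sphere at z = -5) is hypothetical.
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None if the ray misses.
    `direction` is assumed to be unit length, so the quadratic's a = 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# Ray from the eye straight down -Z toward the sphere.
t = ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
print(t)  # hits the front of the sphere at distance 4
```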
Raytracing is much slower than all the algorithms described so far, and usually it is not used in 3D animation. For still images it is often chosen for its great rendering quality. Some applications that use raytracing as their main (or only) engine: POV-Ray, Real3D, Animation Master, Imagine, Presenter 3D.
There is some experimental software that follows roads other than backward tracing. MIRO is one of them. I strongly encourage anyone to visit the MIRO home page; stunning images!
Radiosity
Radiosity is one of the algorithms that takes care of indirect illumination. In a scene, not all parts of the objects are directly hit by a light ray, and with a renderer that cannot provide any form of indirect lighting these parts would be rendered black. In classic raytracing, indirect light is faked by shading these areas with a flat constant value. This trick can be acceptable for most purposes, but there are some scenes (like architectural environments) that suffer too much from this approximation. Radiosity computes indirect light, solving the problem. The two main drawbacks of radiosity are its slowness and its lack of specular lighting computation. To overcome the latter, it is often used in conjunction with raytracing, and the two algorithms complement each other. Lightscape uses this approach and produces very impressive pictures. From version 3.0, even POV-Ray can perform radiosity.
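At its core, radiosity solves an energy-balance system: each patch's brightness is its own emission plus the reflected fraction of what it receives from every other patch. A minimal sketch for a toy three-patch scene, where the form factors and reflectances are entirely hypothetical:

```python
# A sketch of radiosity as an iterative solution of B = E + rho * F * B for a
# tiny 3-patch scene: patch 0 emits light, the others only reflect it.
# Form factors F and reflectances rho are hypothetical.

emission = [1.0, 0.0, 0.0]
rho = [0.0, 0.5, 0.5]              # reflectance of each patch
F = [[0.0, 0.4, 0.4],              # F[i][j]: fraction of the energy leaving
     [0.4, 0.0, 0.2],              # patch i that arrives at patch j
     [0.4, 0.2, 0.0]]

# Jacobi iteration: repeatedly gather light arriving from every other patch.
B = emission[:]
for _ in range(50):
    B = [emission[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))
         for i in range(3)]

print([round(b, 3) for b in B])
```

Patches 1 and 2 receive no direct emission, yet converge to a nonzero brightness: that is exactly the indirect light that plain raytracing fakes with a flat constant.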
The classification given above is not so rigid in real implementations. For example, Z-buffer and scanline can be mixed, and some renderers can use different rendering engines to calculate a single image. LightWave 3D, for example, has both raytracing and Z-buffer, but only the second is usually used when calculating animation, since it is much faster. However, in some scenes you may want to render a glass sphere and want the sphere to refract. The scene can then be calculated with the Z-buffer, except for the sphere, which is raytraced. This technique, known as "selective raytracing", takes advantage of raytracing for some effects without the need to trace the whole scene.