Visualization means generating and rendering objects in a meaningful way. In an interactive graphics application like a CAD program, the emphasis is on generating realistic images of physical objects and environments. The process of rendering a picture includes several tasks:
The electron gun emits a stream of electrons which is accelerated toward the phosphor-coated screen. On the way to the screen, the electrons are forced into a narrow beam by the focusing mechanism and are directed toward a particular point on the screen by the electrostatic or magnetic field produced by the deflection system. When the electrons hit the screen, the phosphor emits visible light.
A raster scan starts at the upper left corner of the screen. During the left-to-right sweep, the beam intensity is modulated to create different shades of gray.
In raster display, the image has to be refreshed at least 30 times a second. The refresh memory is arranged as a two-dimensional array. The entry at a particular row and column stores the color of the corresponding position on the screen in a simple one-to-one relationship.
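The one-to-one mapping between the refresh memory and screen positions can be sketched as follows. This is a minimal illustration, assuming a hypothetical 640x480 grayscale display; the names used here are illustrative, not from any real graphics API.

```python
# Sketch of a refresh (frame) buffer as a 2D array: one intensity value
# per screen position, indexed by row and column. Assumed size 640x480.
WIDTH, HEIGHT = 640, 480

frame_buffer = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(row, col, intensity):
    """Store the shade for one screen position; the video controller
    reads this array at least 30 times a second to refresh the screen."""
    frame_buffer[row][col] = intensity

def get_pixel(row, col):
    return frame_buffer[row][col]

set_pixel(100, 200, 255)  # a white pixel at row 100, column 200
```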
Simple objects are often used to compose more complicated objects or appear repeatedly in various positions, orientations, and sizes. The use of transformations such as translation, rotation and scaling, which are represented as matrices, can save large amounts of memory space and increase the efficiency of handling objects that are in motion, e.g. animated.
Each object can be represented as a collection of its parts; at the same time, each object itself can be, in turn, a part of another larger object. The relationships between object and its parts can be described in terms of a transformation hierarchy.
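The parent-child relationship described above can be sketched in code. In this simplified illustration each part stores only a translation offset relative to its parent (a full system would store a complete instance transformation matrix); the class and method names are illustrative.

```python
# Sketch of a transformation hierarchy: each part stores a transform
# relative to its parent (simplified here to a translation offset).
# The world position of a part is found by composing transforms from
# the part up through its parents.
class Node:
    def __init__(self, offset, parent=None):
        self.offset = offset        # (dx, dy, dz) relative to parent
        self.parent = parent

    def world_position(self):
        x, y, z = self.offset
        if self.parent is not None:
            px, py, pz = self.parent.world_position()
            x, y, z = x + px, y + py, z + pz
        return (x, y, z)

# A car body at (10, 0, 0), with a wheel at (1, -1, 0) relative to it:
car = Node((10.0, 0.0, 0.0))
wheel = Node((1.0, -1.0, 0.0), parent=car)
```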
IT: Instance transformation
For a realistic rendering of an object, properties of the surface material of the object such as emission, reflection of ambient and diffuse light, specular reflection, transparency, and texture have to be specified (see the following section on lighting and shading).
To depict the computational model of an object on the screen of a display monitor, projections among three different coordinate systems as well as procedures for clipping and for the removal of hidden lines and surfaces are required.
World coordinate system (WC): A device-independent coordinate system used by the application program for specifying graphical input and output.
Normalized device coordinate system (NDC): A device-independent intermediate coordinate system, normalized to some range.
Device coordinate system (DC): A device-dependent coordinate system which corresponds to the screen of the display monitor.
A perspective projection can be represented as a 4x4 matrix. A complete specification includes the camera position, the target direction, the focal length, the view-up vector, and the projection plane normal.
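A minimal sketch of such a matrix, assuming the simplest case: a camera at the origin looking down the z-axis with focal length d, so a point (x, y, z) projects to (xd/z, yd/z). The homogeneous division by w performs the perspective foreshortening.

```python
# Perspective projection as a 4x4 homogeneous matrix (simplified case:
# camera at the origin, view direction +z, focal length d).
def perspective_matrix(d):
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 1.0 / d, 0]]   # w becomes z/d

def project(m, p):
    """Multiply a homogeneous point (x, y, z, 1) by m, then divide
    by the resulting w to obtain the projected coordinates."""
    out = [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]
    return [v / out[3] for v in out]

# A point at depth z = 10 seen with focal length d = 5 shrinks to half size:
proj = project(perspective_matrix(5.0), (4.0, 2.0, 10.0, 1.0))
```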
The front clipping plane and the back clipping plane determine a finite view volume. These planes, both of which are parallel to the view plane, are specified as front_distance and back_distance from the view reference point. Side clipping is also needed to cut off the projection of the objects which are outside or partially outside the viewing window (see the figure on the following page).
The task is to remove invisible surfaces and lines from the projection of 3D objects on a 2D plane. One approach is to think of the object as a collection of n polygonal faces and to decide which face is visible at each resolution point on the display device. Doing this for one resolution point requires the examination of all n faces to determine which is closest to the viewer. Another approach is to compare each of the n faces to the remaining n - 1 faces in order to eliminate faces or portions of faces that are not visible.
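The first approach, deciding visibility at each resolution point, is the idea behind the z-buffer algorithm. A minimal sketch, assuming depth values increase away from the viewer and that faces have already been reduced to per-pixel depth samples; all names are illustrative.

```python
# Minimal z-buffer sketch: for every pixel, keep the depth of the
# closest surface seen so far, together with its color.
INF = float("inf")
W, H = 4, 4
depth_buffer = [[INF] * W for _ in range(H)]
color_buffer = [[None] * W for _ in range(H)]

def plot(row, col, depth, color):
    """Write the pixel only if this surface is closer to the viewer
    than what is stored -- this resolves hidden surfaces per pixel."""
    if depth < depth_buffer[row][col]:
        depth_buffer[row][col] = depth
        color_buffer[row][col] = color

plot(1, 1, 5.0, "red")    # far surface drawn first
plot(1, 1, 2.0, "blue")   # nearer surface overwrites it
plot(1, 1, 9.0, "green")  # farther surface is rejected
```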
The next step in creating an image, after hidden surfaces have been removed, is to shade the visible surfaces, taking into account the light sources, surface characteristics, and the positions and orientations of the surfaces, and the camera.
Dull matte surfaces exhibit diffuse reflection, scattering light equally in all directions, so that the surfaces appear to have the same brightness from all viewing angles. The diffuse illumination is proportional to the cosine of the angle between the direction to the light source and the normal to the surface.
The ambient light has uniform brightness in all directions and is caused by the multiple reflections of the light from the many surfaces present in most real environments.
Specular reflection is caused by the reflection of a light source on a smooth surface. In a perfect specular reflection, all light is reflected, not just certain colors, and it is reflected in a single direction, not spread out in all directions. The light from the specular reflection can be seen when the direction from the object to the eye is the same direction as that of the reflected light.
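The three contributions just described (ambient, diffuse, specular) are commonly summed into a single illumination value, as in the Phong model. A sketch with illustrative material coefficients; the specular term uses the direction R of the mirror-reflected light and a made-up exponent n controlling highlight sharpness.

```python
import math

# Simple illumination model: ambient term + diffuse term (Lambert's
# cosine law) + specular term (bright when the eye direction V is
# close to the mirror-reflection direction R of the light).
def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def illuminate(normal, to_light, to_eye, ka=0.1, kd=0.6, ks=0.3, n=16):
    N, L, V = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(0.0, dot(N, L))                 # cos of angle between N and L
    R = tuple(2 * dot(N, L) * nc - lc for nc, lc in zip(N, L))
    specular = max(0.0, dot(R, V)) ** n           # peaks when R == V
    return ka + kd * diffuse + ks * specular

# Light and eye both along the surface normal: maximum brightness.
head_on = illuminate((0, 0, 1), (0, 0, 1), (0, 0, 1))
# Light grazing the surface: only the ambient term remains.
grazing = illuminate((0, 0, 1), (1, 0, 0), (0, 0, 1))
```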
A transparency effect is obtained by showing a portion of the light which is reflected from a hidden object. The larger the fraction of this hidden light which is allowed to seep through the object, the more transparent the object appears.
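This "seeping through" amounts to interpolating between the colors of the front surface and the hidden surface behind it. A sketch, with an illustrative transparency fraction t (t = 0 fully opaque, t = 1 fully transparent):

```python
# Transparency as simple interpolation: a fraction t of the light from
# the hidden surface passes through the front surface.
def blend(front, back, t):
    return tuple((1 - t) * f + t * b for f, b in zip(front, back))

# A half-transparent red surface over a blue one appears purple:
seen = blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)
```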
The problem of shadows is very similar to the hidden surface problem. A shadowed object is one which is hidden from the light source. Shadow and hidden surface calculations are often performed together for efficient computation.
There are two types of surface details: color and texture. Color detail is applied to a smooth surface without appearing to change the geometry of the surface, while texture detail gives the appearance of an altered surface. Color detail can be introduced with a surface detail polygon to show features on a base polygon, or by mapping a digitized photograph of detail onto a surface. Texture can be simulated by disturbing the normals of subdivided polygons in a surface or with fractal surfaces generated by midpoint displacements.
Radiosity based shading techniques compute the light arriving at a surface by analyzing the distribution of light energy in a scene. Ray tracing, a rendering technique, computes the color of a pixel by casting an imaginary ray back into the scene from a viewpoint, until it hits an object, and then recursively casts rays toward light sources and other objects.
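The recursive control flow of ray tracing can be sketched as a skeleton. Here the scene is reduced to a plain intersection function so the recursion is visible; the dictionary fields and the toy scene are stand-ins, not a real renderer.

```python
# Skeleton of recursive ray tracing: cast a ray from the viewpoint,
# shade the closest hit, and recurse along the reflected ray up to a
# depth limit. intersect(ray) returns None (no hit) or a dict holding
# the local color, reflectivity, and the reflected ray.
MAX_DEPTH = 3
BACKGROUND = (0.0, 0.0, 0.0)

def trace(ray, intersect, depth=0):
    if depth >= MAX_DEPTH:
        return BACKGROUND                     # limit the recursion
    hit = intersect(ray)                      # closest surface on the ray
    if hit is None:
        return BACKGROUND
    local = hit["color"]                      # shading with shadow rays
    bounce = trace(hit["reflected"], intersect, depth + 1)
    kr = hit["reflectivity"]
    return tuple(c + kr * b for c, b in zip(local, bounce))

# Toy scene: one surface whose reflected ray hits nothing.
def toy_intersect(ray):
    if ray == "primary":
        return {"color": (0.2, 0.2, 0.2), "reflectivity": 0.5,
                "reflected": "miss"}
    return None

pixel = trace("primary", toy_intersect)
```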
To show smooth motion, it is necessary to display a completely drawn image for a certain time (e.g., one or more thirtieths of a second), then present the next frame, also completely drawn, during the next time period, and so on.
The motion of a camera is handled by a projection matrix. The motion of objects and light sources is done by means of transformation matrices. Each type of motion includes many parameters that must be specified. In real time animation, the interaction should feel perceptually natural to the viewer. Various input devices such as a locator, valuator, and body motion recorder, are used to control motion in animation.
Before a second image can be displayed, the frame buffer must be cleared. In a single-buffer system, the screen is temporarily blank before the next image can be generated, so a visible flicker appears during animation. Double buffering solves this problem. The system's standard bitplanes are divided into two halves: one of which is displayed, while drawing is done in the other. When the image in the back buffer is complete, the buffers are swapped: the back buffer becomes visible, and the former front buffer moves to the back, where it is cleared and receives the next image.
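The swap mechanism can be sketched as follows, with buffers simplified to flat lists of pixel values; the class and sizes are illustrative.

```python
# Sketch of double buffering: draw into the back buffer while the
# front buffer is displayed, then swap, so the viewer never sees a
# half-drawn frame.
W, H = 4, 3

class DoubleBuffer:
    def __init__(self):
        self.front = [0] * (W * H)   # currently displayed
        self.back = [0] * (W * H)    # being drawn, invisible

    def draw(self, index, value):
        self.back[index] = value     # not seen until the swap

    def swap(self):
        self.front, self.back = self.back, self.front
        self.back = [0] * (W * H)    # clear for the next frame

db = DoubleBuffer()
db.draw(5, 255)
visible_before = db.front[5]   # still 0: drawing stays hidden
db.swap()
visible_after = db.front[5]    # 255: the finished frame appears
```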
Since interactive computer graphics is becoming more and more important, new techniques are needed to make the drawing and editing of graphic scenes faster and more comfortable. One way to do this is to use a hierarchical coordinate system together with coordinate transformations. This technique can be used to show an object in any position in the 3-dimensional modelling world on the 2-dimensional computer screen. Every object is located in its own coordinate system, which is in turn located in a world coordinate system. To find the position of every point of a 3D object on the screen, several transformations from one coordinate system to another must be performed. The following figure shows an example of a coordinate transformation, including the number and kinds of transformations needed to make an object in a three-dimensional modelling space visible on the screen.
Many 3D-coordinate-transformations (like scaling, mirroring and rotation) can be described by a matrix multiplication.
This matrix transforms a point (x1, y1, z1) in 3D space to another point (x2, y2, z2) by means of multiplication. Some other transformations (like translation, i.e. moving) are described using vector additions.
We would like to be able to treat these transformations in a consistent way, so that we can combine them easily. If points are expressed in homogeneous coordinates, all of these transformations can be treated as multiplications. Therefore we need to extend the transformation matrix from a 3x3 matrix to a 4x4 matrix.
In this way we can describe most of the transformations we are interested in (translation, rotation, scaling, mirroring, and also perspective viewing and 2D window viewing) with the same consistent format.
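A minimal sketch of this idea: with points written as (x, y, z, 1), a translation, which needs a vector addition in 3x3 form, becomes a 4x4 matrix multiplication just like scaling.

```python
# Homogeneous coordinates: both translation and scaling are expressed
# as 4x4 matrix-vector multiplications on points (x, y, z, 1).
def mat_vec(m, p):
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(4))

def translation(dx, dy, dz):
    return [[1, 0, 0, dx],
            [0, 1, 0, dy],
            [0, 0, 1, dz],
            [0, 0, 0, 1]]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0],
            [0, sy, 0, 0],
            [0, 0, sz, 0],
            [0, 0, 0, 1]]

p = (1.0, 2.0, 3.0, 1.0)
moved = mat_vec(translation(10, 0, 0), p)    # shift along x
scaled = mat_vec(scaling(2, 2, 2), p)        # uniform scale by 2
```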
To build the whole transformation from the object to the screen, all of the matrices for the individual transformations must be multiplied in the right order. This simplifies matters because the programmer does not have to worry about how objects are displayed, only about constructing the right transformation matrices. To support the idea of hierarchical object grouping, the 'graphics library' of Silicon Graphics and other graphical subsystems use stacks for the transformation matrices that have to be multiplied.
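The matrix stack can be sketched as follows: push a copy of the current matrix before drawing a sub-object, compose the sub-object's transform onto it, and pop to restore the parent's transform. This is modeled loosely on the push/pop idea in such libraries; the function names here are illustrative, not a real API.

```python
# Sketch of a transformation-matrix stack for hierarchical grouping.
def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

stack = [identity()]

def push():
    stack.append([row[:] for row in stack[-1]])  # copy current matrix

def pop():
    stack.pop()                                   # restore parent transform

def translate(dx, dy, dz):
    m = [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]
    stack[-1] = mat_mul(stack[-1], m)             # compose onto the top

translate(5, 0, 0)          # parent transform
push()
translate(2, 0, 0)          # child transform: combined x offset is 7
child_x = stack[-1][0][3]
pop()
parent_x = stack[-1][0][3]  # back to the parent's offset of 5
```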
Translator: from AutoCAD-Data into PolyTRIM-format
Since there are so many software programs available that use different formats for their data, translator utilities are becoming increasingly important. In this exercise you are required to write a translator in LISP that converts AutoCAD data into the simple PolyTRIM-CLRView format (.clr). 3Dfaces, meshes, lines and polylines must be translated. In order to translate 2D elements, like lines and polylines, a thickness must be specified. The conversion of blocks, arcs and circles is optional. Every layer must be written out to a separate file. The layer name should be used as the file name (include the extension: .clr). Design a proper interface for your translator utility. An example of a .clr file is in '/homes2/prog/ausgabe/13_uebung.clr'.
Test the success of the translator utility by starting PolyTRIM on an SGI machine and loading the CLRview files that your program has generated. (Enter polytrim in a shell, then use the CLRview LOAD command under the Tools menu; select Raw Polygon as the CLR format and then select the new files.)
The following is a sample of a .clr format file. The lines have been numbered to help clarify the subsequent specifications.