This document describes the operations performed on the data deep in the heart of SliceViewer (mostly in the "DataBlock" class). For a more concrete overview of input data types and syntax, see the usage document.

**Model Data Representation**

The object is represented as a three-dimensional array of density values arranged orthogonally in rows, columns, and planes to form a block of data in space. Each density is a single byte from 0 (black) to 255 (white).
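A minimal sketch of this representation, assuming one flat byte array indexed in column, row, plane order (the same index arithmetic appears in the `renderIntersectSpan()` excerpt later in this document; the class and field names here are illustrative, not SliceViewer's actual API):

```java
// Illustrative voxel storage: one flat byte array, indexed column-first.
class VoxelBlock {
    final int nx, ny, nz;   // columns, rows, planes
    final int nxy;          // voxels per plane, precomputed
    final byte[] data;      // densities, 0 (black) to 255 (white)

    VoxelBlock(int nx, int ny, int nz) {
        this.nx = nx; this.ny = ny; this.nz = nz;
        this.nxy = nx * ny;
        this.data = new byte[nx * ny * nz];
    }

    // Density at (x, y, z), widened to an unsigned 0..255 value.
    int density(int x, int y, int z) {
        return data[x + nx * y + nxy * z] & 0xFF;
    }
}
```

The `& 0xFF` is needed because Java bytes are signed; densities above 127 would otherwise read back as negative.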

Since the density at each point is an approximation to the density of the entire cube surrounding it, this density is called a voxel (or volume element, by analogy with its two-dimensional counterpart, the pixel, or picture element). NMR imagers work by imaging a cross-section of the patient, then translating the patient slightly (a few millimeters) and imaging another cross-section. According to [Hornak, 1997], NMR imagers typically take 256x256 data samples. The Fast Fourier Transform, used to process NMR signal data into an image, is most computationally efficient when run on data whose size is a power of two. A typical data set, then, might consist of one hundred 256x256 planes of data, stacked along the axis of translation.

Of course, the program is capable of accepting input data of any size and inter-planar spacing.

**Main Loop**

The main loop of the program consists of:

- Check for user interaction.
- If an output window needs to be redrawn:
  - Draw the block of data in an offscreen buffer.
  - Copy the buffer to the window.
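The loop above can be sketched as follows, with rendering stubbed out and a dirty flag standing in for the real redraw check (method and field names are illustrative, not SliceViewer's API):

```java
// Illustrative sketch of one pass through the main loop.
class MainLoop {
    boolean needsRedraw = true;   // the window starts out dirty
    int redrawCount = 0;

    // One iteration: poll input, then redraw if necessary.
    boolean step(boolean userChangedView) {
        if (userChangedView) needsRedraw = true;  // check for user interaction
        if (!needsRedraw) return false;           // output window is up to date
        renderToOffscreenBuffer();                // draw the block of data offscreen
        copyBufferToWindow();                     // copy the buffer to the window
        needsRedraw = false;
        return true;
    }

    void renderToOffscreenBuffer() { redrawCount++; }
    void copyBufferToWindow() { /* blit the offscreen buffer */ }
}
```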

**Graphics Fundamentals**

In the program, we define two separate right-handed coordinate spaces: data space, centered on a corner of the density data and measured in individual voxels; and screen space, centered on the top left-hand corner of the display window and measured in screen pixels. Using homogeneous coordinates [Watt, 1993, p. 3], we can use a single 4x4 matrix to map data space to screen space (the fourth coordinate is implicitly taken as 1, allowing translation to be represented in the matrix). This matrix can then be inverted to map points from screen space back into data space.
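Applying such a matrix to a point, with the fourth coordinate implicitly 1, can be sketched as follows (an illustrative helper, not SliceViewer code):

```java
// Apply a 4x4 homogeneous transform to a 3D point, taking the fourth
// coordinate implicitly as 1 (so the last matrix column acts as a translation).
class Transform {
    // m is a row-major 4x4 matrix; returns the transformed {x, y, z}.
    static double[] apply(double[][] m, double x, double y, double z) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]; // * 1
        return out;
    }
}
```

Because the bottom row of the matrix is never consulted, this is exactly the "fourth coordinate implicitly taken as 1" convention: rotation, scaling, and translation all fold into one matrix multiply.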

To project the three-dimensional screen space onto the two-dimensional screen, we use the simplest possible projection system: isometric projection. In this system, the z coordinate of a three-dimensional point is simply ignored, and its x and y position is plotted on the two-dimensional screen. The main advantage conferred by this system is that objects do not shrink with increasing distance, allowing us to measure the size of objects without regard to position. For this reason, an isometric projection is commonly used in scientific visualizations of this kind. By contrast, our eyes (as well as most cameras) see things in perspective, where far things look smaller than close things. This makes it difficult to measure objects, since we must take distance into account.

To color each pixel on the screen, we must first find the location in the block of data which corresponds to this pixel. Then we can apply our interpolation procedure to find an approximation to the density of the block of data at that location.
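The interpolation procedure is not spelled out at this point in the document. A common choice for volume data is trilinear interpolation over the eight voxels surrounding the sample point, sketched here under that assumption (the inner loop shown later in "Efficiency Tricks" instead simply truncates to the containing voxel):

```java
// Trilinear interpolation over the eight voxels surrounding (x, y, z).
class Interp {
    // d[z][y][x] holds densities; the point must lie strictly inside the
    // block so that the +1 neighbors stay in bounds.
    static double trilinear(double[][][] d, double x, double y, double z) {
        int x0 = (int) x, y0 = (int) y, z0 = (int) z;  // containing voxel
        double fx = x - x0, fy = y - y0, fz = z - z0;  // fractional offsets
        double v = 0;
        for (int k = 0; k <= 1; k++)
            for (int j = 0; j <= 1; j++)
                for (int i = 0; i <= 1; i++) {
                    // Each corner is weighted by how close the point is to it.
                    double w = (i == 1 ? fx : 1 - fx)
                             * (j == 1 ? fy : 1 - fy)
                             * (k == 1 ? fz : 1 - fz);
                    v += w * d[z0 + k][y0 + j][x0 + i];
                }
        return v;
    }
}
```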

**Pixel Pushing**

To render this cross-section of the object to the screen, the program must first determine what section of the screen intersects the block of data. To do this, it assembles a polygonal intersection region from the line segments where the slicing plane crosses the block's faces. Each face's line segment comes, in turn, from intersecting that face's four edges with the plane; the intersection of an edge segment and the slicing plane is derived in Appendix A. The resulting intersection points are assembled into a line segment for each face, and the line segments into a polygon.

This polygon intersection is then converted from line segments into spans of pixels running along the horizontal axis, and quantized to individual pixels (that is, the endpoints of the intervals are rounded to integers). The intersection is simultaneously clipped to the boundary of the computer screen. The intersection of the block of data and the slicing plane is now represented as a collection of horizontal line segments, generally referred to as "scan lines". There is one scan line for each y coordinate of the screen.
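A minimal sketch of this span conversion for a convex polygon (clipping to the screen boundary is omitted, and the names are illustrative, not SliceViewer's):

```java
// Convert a convex screen-space polygon into one horizontal span per y.
class Raster {
    // poly[i] = {x, y}; returns spans[y] = {xLeft, xRight}, or null where
    // the polygon does not cover that scan line.
    static int[][] rasterize(double[][] poly, int height) {
        int[][] spans = new int[height][];
        int n = poly.length;
        for (int y = 0; y < height; y++) {
            double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < n; i++) {
                double[] a = poly[i], b = poly[(i + 1) % n];
                if ((a[1] <= y) != (b[1] <= y)) {  // edge crosses this scan line
                    double x = a[0] + (y - a[1]) / (b[1] - a[1]) * (b[0] - a[0]);
                    lo = Math.min(lo, x);
                    hi = Math.max(hi, x);
                }
            }
            if (lo <= hi)  // quantize the endpoints to whole pixels
                spans[y] = new int[] { (int) Math.round(lo), (int) Math.round(hi) };
        }
        return spans;
    }
}
```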

Once this process--referred to as rasterization--is complete, the endpoints of each scan line are mapped from screen space to a location in data block space using the inverse mapping matrix. Because the mapping between spaces is linear (after all, it is accomplished using a matrix), we can save significant computational effort without loss of accuracy by only inverse-mapping the endpoints, then linearly interpolating locations in the data block between them. The interpolation procedure is then called upon to generate a density value at this location, and this density is displayed to the screen.
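The endpoint-only inverse mapping can be sketched as follows, in floating point for clarity (the actual inner loop uses fixed-point arithmetic, as described in the next section; names here are illustrative):

```java
// Inverse-map only the scan line's endpoints into data space, then step
// linearly between them, one sample location per screen pixel.
class Scan {
    // start, end: data-space {x, y, z} for screen pixels xmin and xmax.
    static double[][] scanLineSamples(double[] start, double[] end, int xmin, int xmax) {
        int n = xmax - xmin;
        double[][] samples = new double[n][3];
        for (int axis = 0; axis < 3; axis++) {
            double step = (end[axis] - start[axis]) / n;  // per-pixel increment
            double s = start[axis];
            for (int x = 0; x < n; x++) {
                samples[x][axis] = s;
                s += step;
            }
        }
        return samples;
    }
}
```

Because the space-to-space mapping is linear, these interpolated locations are exactly the ones a full matrix multiply per pixel would produce, at a fraction of the cost.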

**Efficiency Tricks**

Chief among the tricks making the program faster is the use of fixed-point math in the innermost loop. The fundamental task of this program, as outlined above, is to map a line of voxels in 3D space onto a horizontal line of pixels in 2D space. This is accomplished incrementally in DataBlock.renderIntersectSpan(), as:

```java
for (x = xmin; x < xmax; x++) {
    // Truncate the 16.16 fixed-point coordinates to whole voxel indices
    // and copy the density straight into the offscreen pixel buffer.
    backPixels[offset + x] = data[(sx >>> 16) + nx * (sy >>> 16) + nxy * (sz >>> 16)];
    sx += dx; sy += dy; sz += dz;
}
```

Here sx, sy, and sz are the Cartesian coordinates of the source point, and dx, dy, and dz the direction of source movement, each stored as a 16.16 fixed-point integer. In fixed-point notation, we avoid fractional values (like 2.34) by implicitly scaling all calculations by some basis--for example, all calculations could be performed in cents instead of fractional dollars (in which case the basis is 100). Since computers operate in binary, it is most efficient to choose a basis which is a power of two. In the example, the source coordinates are stored with a basis of 65536 (2 to the 16th power), so right-shifting the numbers by 16 bits extracts the integer portion of the fixed-point number. That's what's going on with the "sx>>>16" operations (notice the use of unsigned bit-shifts, saving one entire clock cycle per calculation!)
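The 16.16 convention can be captured in a few illustrative helpers (not SliceViewer code):

```java
// 16.16 fixed-point helpers: 16 integer bits, 16 fractional bits.
class Fixed {
    static final int SHIFT = 16;
    static final int ONE = 1 << SHIFT;  // the basis: 65536

    static int toFixed(double v)  { return (int) (v * ONE); }
    static int intPart(int fixed) { return fixed >>> SHIFT; } // unsigned shift, as in the loop
    static double toDouble(int f) { return (double) f / ONE; }
}
```

Addition of two fixed-point numbers is just integer addition, which is why the inner loop can advance sx, sy, and sz with three plain `+=` operations per pixel.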


**Appendix A: Intersection of data block and slicing plane**

The data block's points are mapped using the transformation matrix into screen space, whose x and y axes run along the slicing plane. Because finding the intersection of a rectangular parallelepiped and the plane is difficult, we break the data block down into its six quadrilateral faces.

These faces are then intersected with the plane by further decomposing them into four line segments. Each line segment is then intersected with the slice plane in the following way. Each line segment can hit the slice plane in at most one point.

Input: vectors V0 and V1, the line segment endpoints.

Output: boolean intersection; vector P to the intersection point.

*1.)* t = -(V0 · ẑ) / ((V1 - V0) · ẑ) (where ẑ is a unit vector in the direction of the z axis)

*2.)* intersection = true if 0 ≤ t ≤ 1, false otherwise

That is, we parameterize the line segment with the variable t, where any point on the line is

*3.)* P(t) = V0 + t(V1 - V0)

Letting P(t) · ẑ = 0 and solving for t in *3.)* results in *1.)*. If 0 ≤ t ≤ 1, the slicing plane and the line segment intersect between the endpoints of the line segment, at the point P(t).
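Assuming the screen-space convention above, where the slicing plane is z = 0 (so V · ẑ is just the point's z coordinate), the appendix procedure can be sketched as:

```java
// Intersection of a line segment (v0 -> v1) with the z = 0 slicing plane,
// following the appendix derivation: t = -z0 / (z1 - z0).
class SegPlane {
    // Returns the intersection point {x, y, 0}, or null if the segment
    // does not cross the plane between its endpoints (0 <= t <= 1).
    static double[] intersect(double[] v0, double[] v1) {
        double dz = v1[2] - v0[2];
        if (dz == 0) return null;       // segment parallel to the plane
        double t = -v0[2] / dz;
        if (t < 0 || t > 1) return null;
        return new double[] {
            v0[0] + t * (v1[0] - v0[0]),
            v0[1] + t * (v1[1] - v0[1]),
            0
        };
    }
}
```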

**References**

Hornak, Joseph. 1997. *The Basics of MRI*. http://www.cis.rit.edu/htbooks/mri/

Watt, Alan. 1993. *3D Computer Graphics, 2nd Ed.* New York: Addison-Wesley.