Practical Tutorial 2 (Assessed) - Volume Rendering
The goal of this tutorial is to visualise a head CT dataset by
writing your own volume renderer. This coursework can be done individually or in groups of up to three.
The solutions should be
submitted via CATE by midnight Monday 7th November.
This coursework can be found at:
http://www.doc.ic.ac.uk/~eedwards/teaching/2012/AGV/VolRenCoursework/
Learning outcomes
At the end of this coursework you will
- know how to implement a volume renderer
- understand how the transfer function can be used to highlight different structures and achieve different effects
- have implemented maximum intensity projection
- have implemented the over operator
- understand volumetric lighting
- know how to produce transparent rendering of volumetric data
Data
You will find a dataset, HeadCT.vtk. This is a vtkStructuredPoints dataset
containing a CT scan from the Visible Human Project.
There is also a skeleton program VolumeRender.cpp and a corresponding Makefile. If you are using an Ubuntu 10 machine there is an alternative Makefile.
To compile the example simply copy these into a directory and type "make".
If you want to amend the Makefile to use
another version of VTK that's fine. To run the program type
"volumeRender HeadCT.vtk tmp.png". This renders a single slice
from the dataset on the screen and saves this rendering to
tmp.png.
The scalar values in the data range from 0 to around 2500. You
will need to explore the data to discover which values correspond
to which tissues. You can use the subsampled version (HeadCTsub.vtk)
for experimentation, but the final images should come from the full
dataset HeadCT.vtk.
The file VolumeRender.cpp contains three functions. The "main" sets up
the display to render the 3D data into a 2D RGB image (range 0.0-1.0
for each colour); you should not need to amend it. The TransferFunction
takes a short value from the 3D image and converts it to an rgb_alpha
value; initially it simply maps the range of greyscales in the data to
0.0-1.0 in the rgb image. You will need to adjust this TransferFunction
to achieve different renderings in the exercises below. The
VolumeRender function loops through the CT data and copies slice 200
into the RGB image.
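For orientation, a minimal sketch of such a greyscale transfer function is
given below. The RGBA struct, the function name and the scaling by 2500 are
illustrative only and may differ from the skeleton.

    // Hypothetical sketch only: the skeleton's actual struct and signature may differ.
    struct RGBA { double r, g, b, a; };

    // Map a raw CT value (roughly 0 to 2500 in this data) to a grey level in [0,1].
    RGBA GreyTransferFunction(short value)
    {
        double grey = value / 2500.0;
        if (grey < 0.0) grey = 0.0;
        if (grey > 1.0) grey = 1.0;
        return { grey, grey, grey, 1.0 };
    }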
Exercise 1 - Maximum intensity projection
Amend the VolumeRender function so that it loops through the y voxels.
This is effectively a parallel projection along the y-axis, with ray
accumulation taking place at every voxel. Write a new generic
"Composite" function that takes the current ray colour and combines it
with the value from the current voxel. In this instance, make the
Composite function keep the maximum value along the ray. This should
produce a rendering showing the skull and highlighting the fillings in
the teeth.
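As a starting point, one possible shape for the MIP loop and Composite
function is sketched below. It is only a sketch: the RGBA struct, the
x-fastest voxel ordering and the helper names are assumptions, not the
skeleton's actual layout.

    #include <vector>

    struct RGBA { double r, g, b, a; };

    // MIP compositing: keep whichever colour is brighter along the ray.
    RGBA Composite(const RGBA &ray, const RGBA &voxel)
    {
        return (voxel.r > ray.r) ? voxel : ray;
    }

    // Assumed greyscale transfer function, as in the unmodified skeleton.
    RGBA TransferFunction(short v)
    {
        double g = v / 2500.0;
        if (g < 0.0) g = 0.0;
        if (g > 1.0) g = 1.0;
        return { g, g, g, 1.0 };
    }

    // Parallel projection along y: one ray per (x, z) pixel, visiting every voxel.
    // The volume is assumed to be stored x-fastest; the output image must be
    // preallocated to dims[0] x dims[2] pixels.
    void VolumeRenderMIP(const std::vector<short> &volume, const int dims[3],
                         std::vector<RGBA> &image)
    {
        for (int z = 0; z < dims[2]; ++z)
            for (int x = 0; x < dims[0]; ++x) {
                RGBA ray = { 0.0, 0.0, 0.0, 0.0 };
                for (int y = 0; y < dims[1]; ++y) {
                    short v = volume[x + dims[0] * (y + dims[1] * z)];
                    ray = Composite(ray, TransferFunction(v));
                }
                image[x + dims[0] * z] = ray;
            }
    }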
Exercise 2 - "X-ray" projection
Alter the Composite
function so that it implements the over operator. Amend the
transfer function so that the colour is always white, but the
alpha value starts at zero for some value and ramps up linearly
with increasing scalar value. The maximum alpha for this dataset should
still be quite small to avoid whiting out the rendering. Experiment
with the values to produce a rendering that
looks similar to an X-ray.
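A minimal sketch of front-to-back "over" compositing and an X-ray style
transfer function is given below. The ramp start, the maximum alpha and the
2500 upper bound are guesses you will need to tune; accumulating back to
front is equally valid if you reverse the loop and the operator.

    struct RGBA { double r, g, b, a; };

    // "Over" operator, accumulating front to back: each new voxel is attenuated
    // by the opacity already built up along the ray.
    RGBA Composite(const RGBA &ray, const RGBA &voxel)
    {
        double w = (1.0 - ray.a) * voxel.a;
        return { ray.r + w * voxel.r,
                 ray.g + w * voxel.g,
                 ray.b + w * voxel.b,
                 ray.a + w };
    }

    // X-ray style transfer function: always white, with a small linear alpha
    // ramp. rampStart and maxAlpha are placeholder values to experiment with.
    RGBA TransferFunction(short v)
    {
        const double rampStart = 100.0;
        const double maxAlpha  = 0.05;
        double a = (v - rampStart) / (2500.0 - rampStart) * maxAlpha;
        if (a < 0.0) a = 0.0;
        if (a > maxAlpha) a = maxAlpha;
        return { 1.0, 1.0, 1.0, a };
    }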
Exercise 3 - Face and skull rendering
Change the transfer function so that it provides a reasonable skin
tone and so that the alpha value ramps up from 0.0 to 1.0 over a
small range at the value corresponding to skin. The rendering will
appear quite flat but should cover the area of the face. Choose
another threshold to provide a similar rendering of the skull. The
flat nature of these renderings underlines the need to incorporate
lighting.
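One possible surface-style transfer function is sketched below. The skin
threshold, ramp width and colour are placeholders to be found by exploring
the data; a higher (bone) threshold and a grey or white colour gives the
corresponding skull rendering.

    struct RGBA { double r, g, b, a; };

    // Surface-style transfer function: constant skin tone, with alpha ramping
    // from 0 to 1 over a narrow window at the skin threshold. The threshold,
    // window and colour are assumed values - explore the data to set them.
    RGBA TransferFunction(short v)
    {
        const double skinLevel = 500.0;   // placeholder scalar value for skin
        const double rampWidth = 100.0;   // alpha goes from 0 to 1 over this range
        double a = (v - skinLevel) / rampWidth;
        if (a < 0.0) a = 0.0;
        if (a > 1.0) a = 1.0;
        return { 0.9, 0.7, 0.6, a };      // rough skin tone; use grey/white for bone
    }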
Exercise 4 - Face and skull rendering with lighting
To provide lighting you will need to implement volumetric
gradients as described in the notes. For each voxel, calculate the
gradient by finite differences. As we are rendering along the y
direction, you should scale the voxel colour by the dot product of the
normalised gradient with (0,1,0). A few things to note: take the
absolute value of this dot product, and only apply lighting where the
gradient magnitude exceeds a threshold, since the gradient direction is
not well approximated for small gradient magnitudes. This method should
produce decent renderings of the face and the skull.
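One way to structure the gradient and lighting calculation is sketched
below. The voxel indexing, the magnitude threshold and the function names
are assumptions, and boundary voxels are left for you to handle.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Central finite-difference gradient at an interior voxel (x, y, z).
    // The volume is assumed to be stored x-fastest, as in the earlier sketches.
    Vec3 Gradient(const std::vector<short> &volume, const int dims[3],
                  int x, int y, int z)
    {
        auto at = [&](int i, int j, int k) {
            return static_cast<double>(volume[i + dims[0] * (j + dims[1] * k)]);
        };
        return { 0.5 * (at(x + 1, y, z) - at(x - 1, y, z)),
                 0.5 * (at(x, y + 1, z) - at(x, y - 1, z)),
                 0.5 * (at(x, y, z + 1) - at(x, y, z - 1)) };
    }

    // Lighting factor for rays along (0,1,0): the absolute y component of the
    // normalised gradient, applied only where the gradient is large enough to
    // be a reliable surface-normal estimate. minMagnitude is a guess to tune.
    double LightingScale(const Vec3 &g)
    {
        const double minMagnitude = 50.0;
        double mag = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (mag < minMagnitude) return 1.0;   // leave small gradients unlit
        return std::fabs(g.y) / mag;          // |dot(g / |g|, (0,1,0))|
    }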
Exercise 5 - Combined face and skull rendering with lighting
Incorporating volumetric lighting, alter the transfer function to
include a spike at the level of the skin that reaches an alpha smaller
than 1.0, together with the same ramp for the skull as in Exercise 4.
This should produce a rendering showing the skull through transparent
skin.
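A two-material transfer function along these lines is sketched below; every
level, width, opacity and colour in it is a placeholder to tune against the
data.

    #include <cmath>

    struct RGBA { double r, g, b, a; };

    // Combined skin-and-skull transfer function. A triangular spike of limited
    // opacity sits at the skin level; bone keeps the full 0-to-1 ramp from the
    // earlier exercises. Every numeric constant here is an assumption to tune.
    RGBA TransferFunction(short v)
    {
        const double skinLevel = 500.0, skinWidth = 100.0, skinAlpha = 0.3;
        const double boneStart = 1100.0, boneRamp  = 100.0;

        if (v >= boneStart) {                     // bone: ramp alpha up to 1
            double a = (v - boneStart) / boneRamp;
            if (a > 1.0) a = 1.0;
            return { 1.0, 1.0, 0.9, a };
        }
        double spike = 1.0 - std::fabs(v - skinLevel) / skinWidth;
        if (spike > 0.0)                          // skin: partly transparent spike
            return { 0.9, 0.7, 0.6, skinAlpha * spike };

        return { 0.0, 0.0, 0.0, 0.0 };            // everything else is invisible
    }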
Notes
Please incorporate all the images produced into a single
document. You should submit this along with your source code
VolumeRender.cpp and include a description of what thresholds and
opacities were used so these images can be reproduced. The source
file should include all the code to run any of the
exercises (commented out if necessary).
The VTK window includes an interactor that allows you to alter the
thresholds visible in the window by dragging the mouse. This is useful
for debugging purposes, but the PNG image is saved before any
interaction and you should leave it functioning in this way. Your
code, not the interaction, should produce the given result. Also, VTK
includes many helper functions for volume rendering. You should not
use these - the aim of the exercise is to write the volume renderer
yourself. This is quite a noisy CT scan so don't expect perfect
results.
YOU CAN DO THIS PROJECT INDIVIDUALLY OR IN GROUPS OF UP TO THREE
SUBMIT THE TWO FILES ON CATE - VolumeRender.cpp and report.pdf by
MIDNIGHT MONDAY NOVEMBER 7TH
Philip Edwards