VolRendYo! is a program for visualizing volume data. It adds a depth-of-field effect to a slice-based direct volume renderer, following the approach proposed by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro. The demo is meant to show that a depth-of-field effect helps users better perceive which structures lie in front of others in volume renderings.
How to Use
Edit “start.bat” to set your resolution, monitor refresh rate, full-screen state, and dataset index, then double-click it. Alternatively, run volrendyo.exe directly to use the default settings. At a resolution of 1024×768, an NVIDIA GeForce GTX 780 reaches approx. 20 fps.
Rotate the camera with the mouse. Move the camera forward, left, backward, and right with WASD. Move the camera up with Space and down with Ctrl. Hold Shift for faster movement.
How to Build
- Open the solution with Visual Studio 2015.
- Select the normal Release mode. (Debug takes quite long to pre-process the normals; other build configurations are not set up.)
- Run in Visual Studio, or start the batch file in the release folder.
- You can choose a volume data set via a command-line parameter.
- We used GLM for all vector and matrix math.
- We used OpenGL with GLEW for loading the OpenGL extensions.
- We used GLFW for input handling, window management, and OpenGL initialization.
- We used FMOD to play the music.
We re-use the slice-based volume renderer from our previous project, Broken Magic. It is based on GPU Gems chapter 39, including the shadowing technique. Instead of sheep wool or clouds, it now renders downloaded volume data from industrial or medical CT scans.
The program can load some pre-defined volume data sets. Only pre-defined ones, because some data sets have a header containing the resolution and bit depth, while others lack any header. The program loads the binary data from the file into main memory and converts it into a format usable by the graphics card, i.e. it bit-shifts the usually 14 used bits of each 16-bit sample down to 8 bits. Then it calculates the gradient (normals) for each voxel, which is afterwards smoothed with a simple box filter. The density and gradient information are then uploaded to graphics memory as a 3D texture with four components. The program creates a scene graph consisting of a node for the volumetric object, a circling light source, and a movable camera.
Our renderer implements the process described in the paper by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro. For the slice-based rendering, we use quads as proxy geometry, rotated to face the camera. For each fragment of a slice, we sample the 3D texture to get the density and normal of the corresponding voxel. This information serves as input for the transfer function, a 2D texture, whose result is the fragment’s color. We have two stacks of slices: one rendered back-to-front and one rendered front-to-back. The first uses the over operator to blend its fragments with the colors behind them; the second uses the under operator. The results of both stacks are blended and displayed.
To achieve the depth-of-field effect, we don’t just blend a slice’s fragments with those of the previous slice. Instead, we take multiple samples within a circle of confusion on the previous slice. The radius of this circle of confusion is determined by the slice’s distance to the focus slice and the strength of the depth-of-field effect.
About the project
We (Philipp Erler and Robin Melán) created this demo during the Visualization 2 course of our studies at TU Wien. We re-used parts of our engines from the real-time graphics exercise. The depth-of-field effect is based on the paper by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro.