Version 2 (modified by martin, 10 years ago)



What follows is an outline for a proposed volume rendering NodeKit for the OpenSceneGraph. The project awaits funding approval. If you'd like to help fund or contribute to this development, please email Robert Osfield.

The Digital Learning Foundation is currently coordinating an effort to raise funding for osgVolume; details can be found at:

If you wish to support this project, you can make a donation here:

Phase one:

1) Interoperability:

a) DICOM reading
b) Integration with !Present3D and other OpenSceneGraph based viewers.
c) ASCII and binary support for reading and writing osgVolume scene graphs.

2) Rendering:

a) Multi-texture bricks, arranged as a multi-resolution hierarchy
b) Transfer functions:

i) pre-computed on CPU
ii) encoded into 1D textures
iii) computed on GPU as part of a shader

c) Handling of mixed data types - polygons, lines, text and volumes in one space
d) Support for range of hardware/driver capabilities

i) Standard Texture3D, with a range of max texture sizes
ii) ARB vertex and fragment program
iii) OpenGL 2.0 Shader Language
iv) NVidia's compressed 3D textures

e) Clipping planes + boxes
f) Polygonal segmentation
g) Automatic quality control - render at high speed/lower quality while the viewpoint is moving, switching to higher-quality techniques when the view is static.
h) Dynamic video resizing.
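
Transfer function options (i) and (ii) above could, for example, pre-compute a 256-entry RGBA lookup table on the CPU and upload it as a 1D texture for the fragment shader to sample with the voxel intensity. A minimal sketch, with all names illustrative rather than part of any existing osgVolume API:

```cpp
#include <array>
#include <cstdint>

// One RGBA entry of the lookup table, 8 bits per channel.
struct RGBA8 { std::uint8_t r, g, b, a; };

// Pre-compute a simple transfer function on the CPU: a linear grey ramp
// whose opacity also rises with intensity.  The resulting table is what
// would be uploaded as a 1D texture and sampled in the fragment shader.
// (Illustrative sketch only - not an existing osgVolume API.)
inline std::array<RGBA8, 256> makeGreyRampTransferFunction()
{
    std::array<RGBA8, 256> lut{};
    for (int i = 0; i < 256; ++i)
    {
        const std::uint8_t v = static_cast<std::uint8_t>(i);
        lut[i] = RGBA8{v, v, v, v};   // colour and opacity track intensity
    }
    return lut;
}
```

On capable hardware the same curve could instead be evaluated directly in the fragment shader, which is option (iii) above.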

3) Data processing:

a) Iso-surface generation
b) Length, Area and Volume computation
c) Image Processing:

i) Biasing / Transfer functions
ii) Flood fill segmentation
iii) Manifold segmentation
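
Flood fill segmentation (item ii) can be sketched as a stack-based traversal over the voxel grid, here assuming a simple intensity-threshold membership test; the data layout and names are illustrative:

```cpp
#include <array>
#include <stack>
#include <vector>

// Flood-fill segmentation over a W x H x D voxel grid stored as a flat
// vector (x fastest).  Starting from a seed voxel, every 6-connected
// neighbour whose intensity is >= threshold joins the segment mask.
// (Illustrative sketch - the membership test and layout are assumptions.)
inline std::vector<bool> floodFillSegment(
    const std::vector<int>& voxels, int W, int H, int D,
    int sx, int sy, int sz, int threshold)
{
    std::vector<bool> mask(voxels.size(), false);
    auto idx = [=](int x, int y, int z) { return (z * H + y) * W + x; };

    if (voxels[idx(sx, sy, sz)] < threshold)
        return mask;                          // seed lies outside the segment

    std::stack<std::array<int, 3>> todo;
    todo.push({sx, sy, sz});
    mask[idx(sx, sy, sz)] = true;

    while (!todo.empty())
    {
        const auto [x, y, z] = todo.top();
        todo.pop();
        const int nbr[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (const auto& d : nbr)
        {
            const int nx = x + d[0], ny = y + d[1], nz = z + d[2];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || nz < 0 || nz >= D)
                continue;                     // stay inside the volume
            const int i = idx(nx, ny, nz);
            if (!mask[i] && voxels[i] >= threshold)
            {
                mask[i] = true;
                todo.push({nx, ny, nz});
            }
        }
    }
    return mask;
}
```

The seed point and threshold would come from the user interface (item 4f below).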

4) User interface:

a) Support for mouse, keyboard and gamepad in an interchangeable way
b) Control of eye point
c) Control of clipping planes/boxes
d) Control of transfer function curves and colours
e) Annotation
f) Flood fill segmentation control
g) Isosurface generation/segmentation control
h) Measurement of lengths, areas and volumes
i) File selection, quality specification
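
The volume measurement items (3b and 4h above) could, in the simplest case, count voxels in a segmentation mask and scale by the physical size of one voxel, with the spacings taken from the image metadata (e.g. the DICOM pixel spacing). A minimal sketch with illustrative names:

```cpp
#include <cstddef>
#include <vector>

// Estimate the physical volume of a segmented region: count the voxels
// in the segmentation mask and scale by the size of a single voxel.
// Spacings are in millimetres, so the result is in cubic millimetres.
// (Illustrative sketch - not an existing osgVolume API.)
inline double segmentVolumeMM3(const std::vector<bool>& mask,
                               double spacingX, double spacingY, double spacingZ)
{
    std::size_t count = 0;
    for (bool inside : mask)
        if (inside) ++count;
    return static_cast<double>(count) * spacingX * spacingY * spacingZ;
}
```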

Phase two:

1) Interoperability

d) DICOM writing
e) Full DICOM system integration
f) 3rd party tool integration, e.g. browsers and other medical tools

2) Rendering

i) Volume Paging
j) Multiple GPU rendering + compositing
k) Cluster rendering
l) 3D video texturing via either of:

i) a custom streamed 3D texture format
ii) a 2D video stream that delivers one or more slices at a time to build up an animated 3D texture.
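
Option (ii) can be sketched as round-robin slice updates: each incoming 2D video frame replaces one z-slice of the 3D texture, so the whole volume is refreshed over `depth` frames. The class below is an illustrative CPU-side model, not an existing osgVolume API:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// CPU-side model of an animated 3D texture refreshed one z-slice per
// incoming 2D video frame.  Slices are replaced round-robin, so after
// `depth` frames the whole volume has been updated once.
// (Illustrative sketch - not an existing osgVolume API.)
class SliceStreamedVolume
{
public:
    SliceStreamedVolume(int width, int height, int depth)
        : width_(width), height_(height), depth_(depth),
          texels_(static_cast<std::size_t>(width) * height * depth, 0),
          nextSlice_(0) {}

    // Push one video frame (width*height texels) into the next z-slice.
    // A GL implementation would issue the sub-image upload here instead
    // of re-uploading the whole 3D texture.
    void pushFrame(const std::vector<std::uint8_t>& frame)
    {
        const std::size_t sliceSize =
            static_cast<std::size_t>(width_) * height_;
        std::copy(frame.begin(), frame.end(),
                  texels_.begin() + nextSlice_ * sliceSize);
        nextSlice_ = (nextSlice_ + 1) % depth_;
    }

    std::uint8_t texel(int x, int y, int z) const
    {
        return texels_[(static_cast<std::size_t>(z) * height_ + y) * width_ + x];
    }

private:
    int width_, height_, depth_;
    std::vector<std::uint8_t> texels_;
    std::size_t nextSlice_;
};
```

In an OpenGL implementation each `pushFrame` would map to a `glTexSubImage3D` call for the slice being replaced, avoiding a full 3D texture upload per frame.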

3) Data processing

d) Image processing (continued):

i) Sharpening
ii) Edge detection
iii) Smoothing
iv) Correlations
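
As a sketch of the smoothing item, a 3x3 box filter over a single 2D slice, clamping at the borders; the same idea extends to a 3x3x3 kernel in 3D (names are illustrative):

```cpp
#include <vector>

// 3x3 box smoothing of a W x H image slice, clamping coordinates at the
// borders so edge pixels average over the available neighbourhood only.
// (Illustrative sketch - not an existing osgVolume API.)
inline std::vector<double> boxSmooth3x3(const std::vector<double>& img,
                                        int W, int H)
{
    std::vector<double> out(img.size(), 0.0);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            double sum = 0.0;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= W || ny < 0 || ny >= H)
                        continue;              // clamp at the image border
                    sum += img[ny * W + nx];
                    ++n;
                }
            out[y * W + x] = sum / n;          // average over valid pixels
        }
    return out;
}
```

Sharpening and edge detection follow the same pattern with a different kernel (e.g. a Laplacian).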

4) User Interface

j) Control of the above phase two items