Version 3 (modified by robert, 9 years ago)



The osgVolume library is being developed as part of the Joint Medical Visualization Project, a collaboration between Aberdeen University, the Digital Learning Foundation and OpenSceneGraph Professional Services. The project runs from August 2008 to July 2009, and will deliver phase one of the work below.

If you are interested in using, testing or helping to develop osgVolume then please contact Robert Osfield (robert at openscenegraph dot com) directly by email, via the public osg-users mailing list, and/or by signing up to the interested parties page with a note of your own project scope and your specific needs/interests in volume rendering functionality in the OpenSceneGraph.

Phase one:

1) Interoperability:

a) DICOM reading
b) Integration with !Present3D and other OpenSceneGraph based viewers.
c) ASCII and binary support for reading and writing osgVolume scene graphs.

2) Rendering:

a) Multi texture bricks - arranged as a multi-resolution hierarchy
b) Transfer functions:

i) pre-computed on CPU,
ii) encoded into 1D textures
iii) computed on GPU as part of a shader

c) Handling of mixed data types - polygons, lines, text and volumes in one space
d) Support for range of hardware/driver capabilities

i) Standard Texture3D, with a range of max texture sizes
ii) ARB vertex and fragment program
iii) OpenGL 2.0 Shader Language
iv) NVidia's compressed 3D textures

e) Clipping planes + boxes
f) Polygonal segmentation
g) Automatic quality control - render at high speed/lower quality while the viewpoint is moving, switching to higher quality techniques when there is time to render slowly.
h) Dynamic Video Resizing.

3) Data processing:

a) Iso-surface generation
b) Length, Area and Volume computation
c) Image Processing:

i) Biasing / Transfer functions
ii) Flood fill segmentation
iii) Manifold segmentation
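To give a flavour of item 3c-ii, flood fill segmentation typically grows a region outwards from a seed voxel across connected neighbours whose intensity lies within a tolerance of the seed value. The following is a minimal sketch over a flattened, x-fastest voxel grid; the function name and layout are assumptions for illustration, not the osgVolume interface:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <queue>
#include <vector>

// Illustrative sketch of flood fill segmentation: breadth-first growth
// from a seed voxel across 6-connected neighbours whose intensity is
// within `tolerance` of the seed's intensity.  Returns a boolean mask
// over the same flattened (x-fastest) grid as the input.
std::vector<bool> floodFillSegment(const std::vector<float>& voxels,
                                   int nx, int ny, int nz,
                                   int sx, int sy, int sz,
                                   float tolerance)
{
    auto index = [&](int x, int y, int z) {
        return static_cast<std::size_t>((z * ny + y) * nx + x);
    };
    std::vector<bool> mask(voxels.size(), false);
    float seedValue = voxels[index(sx, sy, sz)];
    std::queue<std::array<int,3>> frontier;
    frontier.push({sx, sy, sz});
    mask[index(sx, sy, sz)] = true;
    const int offsets[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!frontier.empty())
    {
        std::array<int,3> p = frontier.front();
        frontier.pop();
        for (const auto& o : offsets)
        {
            int px = p[0] + o[0], py = p[1] + o[1], pz = p[2] + o[2];
            if (px < 0 || py < 0 || pz < 0 || px >= nx || py >= ny || pz >= nz)
                continue;                                  // outside the volume
            std::size_t i = index(px, py, pz);
            if (mask[i]) continue;                         // already visited
            if (std::fabs(voxels[i] - seedValue) > tolerance) continue;
            mask[i] = true;
            frontier.push({px, py, pz});
        }
    }
    return mask;
}
```

A mask of this kind is also a natural input for the measurement and iso-surface items above.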

4) User interface:

a) Support for mouse, keyboard and gamepad in an interchangeable way
b) Control of eye point
c) Control of clipping planes/boxes
d) Control of transfer function curves and colours
e) Annotation
f) Flood fill segmentation control
g) Isosurface generation/segmentation control
h) Measurement of lengths, areas and volumes
i) File selection, quality specification
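For the measurement items (3b and 4h above), volume measurement of a segmented region reduces to counting the voxels flagged by a segmentation mask and scaling by the physical size of one voxel. A minimal sketch, with an illustrative function name rather than any osgVolume API:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch of volume measurement: count the voxels flagged
// by a segmentation mask over a flattened 3D grid, then multiply by
// the physical dimensions of a single voxel (e.g. in millimetres, as
// reported by the DICOM pixel spacing and slice thickness).
double measureVolume(const std::vector<bool>& mask,
                     double voxelWidth, double voxelHeight, double voxelDepth)
{
    std::size_t count = 0;
    for (bool inRegion : mask)
        if (inRegion) ++count;
    return static_cast<double>(count) * voxelWidth * voxelHeight * voxelDepth;
}
```

Length and area measurements follow the same pattern, summing along a user-drawn path or over a surface rather than over the whole mask.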

Phase two:

1) Interoperability

d) DICOM writing
e) Full DICOM system integration
f) 3rd party tool integration, e.g. browsers and other medical tools

2) Rendering

i) Volume Paging
j) Multiple GPU rendering + compositing
k) Cluster rendering
l) 3D video texturing via either of:

i) custom stream 3D texture format
ii) a 2D video stream that delivers one or more slices at a time to build up an animated 3D texture.

3) Data processing

d) Image processing (continued):

i) Sharpening
ii) Edge detection
iii) Smoothing
iv) Correlations
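The image processing items above are all variations on convolving the volume with a small kernel. As an illustration, the sketch below implements smoothing as a box filter that averages each voxel with its six axial neighbours, clamping at the borders; sharpening and edge detection follow the same pattern with different kernel weights. The names and grid layout are assumptions, not the osgVolume interface:

```cpp
#include <vector>

// Illustrative sketch of a smoothing pass: replace each voxel by the
// average of itself and its six axial neighbours on a flattened,
// x-fastest 3D grid, clamping coordinates at the volume borders.
std::vector<float> boxSmooth(const std::vector<float>& voxels,
                             int nx, int ny, int nz)
{
    auto clamp = [](int v, int hi) { return v < 0 ? 0 : (v > hi ? hi : v); };
    auto at = [&](int x, int y, int z) {
        return voxels[(clamp(z, nz-1) * ny + clamp(y, ny-1)) * nx + clamp(x, nx-1)];
    };
    std::vector<float> out(voxels.size());
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x)
            {
                float sum = at(x,y,z)
                          + at(x-1,y,z) + at(x+1,y,z)
                          + at(x,y-1,z) + at(x,y+1,z)
                          + at(x,y,z-1) + at(x,y,z+1);
                out[(z * ny + y) * nx + x] = sum / 7.0f;
            }
    return out;
}
```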

4) User Interface

j) Control of the above phase two items