Texture3D size in volume rendering

I’m working on a volume raycasting application and I’m having a problem:
I’m creating a 3D texture of size 512x512x96x1 and it works fine, but if I increase it I see only a black box instead of the volume model… I’ve looked for a solution, but all I’ve found was to check the size with a proxy 3D texture - and that looks OK. (http://www.opengl.org/resources/faq/technical/texture.htm#text0120)

I know that my video card (NVIDIA Quadro FX 1600M) should support much larger textures than the 24MB mentioned above - I am able to create a procedural texture in RenderMonkey, and 512x512x256x4 (256MB) works just fine. What’s more, I’ve already worked with another application (written in C, C# and Java) that uses a 512x512x125x2 (~62MB) 3D texture.

Is there a limit on 3D texture size? If so, why am I seeing two different limits?

I’m on Windows 7, programming in C# with VS2010 and OpenTK.

Please, if you have any suggestions let me know.
Thanks in advance!

OK, the problem was that I was sending the byte[] array to the texture as unsigned byte. After I changed it to float it started to work just fine.

Can anyone explain why ~24MB was the maximum texture size with a uchar pixel format?

I think the problem is in your pixel store settings. You should use
PixelPackAlignment = 1 and PixelUnpackAlignment = 1. The black
is like a “sign”: it means that there is no content or some
settings are wrong. When working with 8-bit raw files you should use

filter: GL_LINEAR
internal format: GL_INTENSITY

Furthermore, check for OpenGL errors with glGetError().
Note that float datasets in most cases store 12-bit values and not
16-bit values.
By the way, uchar files aren’t restricted to 24MB. I can load much larger files into my raycaster (uchar files).


Yes, I have them set. Correct me if I’m wrong: I should set both of these before creating the texture?

Yes, I did that. I’d like to add another alpha channel for better parameterization.

I have checked it - no errors.

Good to know that, thanks!

I have loaded 60MB before without problems. But I was not working with OpenTK then.

I will make some checks later.

Thanks for the help, I appreciate it.


What do you mean by:

Yes, I did that. I’d like to add another alpha channel for better parameterization.

When we talk about raw files, there is no alpha channel. It’s just a 3D grayscale “image”.

Yes, I have them set. Correct me if I’m wrong: I should set both of these before creating the texture?

Yes, you set them before uploading the texture. You first
create the texture, then set the state (glTexEnv etc.).

There you can find two code samples, SimpleSlicer and GPUSlicer. Check out SimpleSlicer; it’s straightforward.

And another hint for your transfer functions: better use
CLAMP_TO_EDGE instead of CLAMP_TO_BORDER. For me, CLAMP_TO_BORDER causes artifacts with raycasting while CLAMP_TO_EDGE doesn’t.

I don’t know whether it makes sense or not, but try to upload your
texture per slice with glTexSubImage3D(…).

Did you create your own (synthetic) 3D volumes, or did you use
a raw file downloaded from the internet? There is an 8-bit raw file
in the example mentioned above (SimpleSlicer). It’s the well-known engine block: 256x256x256, 8-bit. Maybe you can check that out.


Thank you very much. I’ll definitely use your advice!

To be clear: I want to use the alpha channel in my raycasting algorithm as another parameter, just for tests. Of course I’ll replace it with a 1D transfer function.

My 3D textures were at first just a synthetically generated sphere :wink: , but I want to apply some medical data I have and want to work with it.

I’ll check the glTexSubImage3D function. It may be useful.

Thanks again!

If you have an additional alpha channel, then you may not be able to
use the luminance or intensity formats. You might use GL_RGBA
instead. Don’t hesitate to ask if you have any further questions.
Btw, you can also check out www.voreen.org. It’s an open-source
volume rendering framework. There you might have a look into the
shaders, just for a start. When using raycasting for volume rendering, you need to reconstruct your entry points when the near plane intersects your bounding box (the colored cube used for entry/exit point generation). Otherwise you can’t move
your camera inside the volume.


With the 512x512x125x2 dataset I used LuminanceAlpha for both the internal and external voxel format. RGBA would be more expensive… anyway, I will probably end up with RGBA…

Thanks for the interesting approach to rendering from inside the volume. I will read up on it and finally try to implement it myself.

Thanks again - this was very helpful information!

Yes - the problem was the pixel store settings:

GL.PixelStore(PixelStoreParameter.UnpackAlignment, 1);
GL.PixelStore(PixelStoreParameter.PackAlignment, 1);

That did the trick for me. I can now use larger 3D texture sizes :)

Again thank you very much!

Hi, good to know it’s working now.
Of course you can use the default pixel alignment (4),
but then each row of your volume texture has to be
4-byte aligned - i.e. width × bytes-per-texel must be a
multiple of 4. Power-of-two widths satisfy this automatically,
and luckily for us, modern hardware supports NPOT textures anyway.