My volume renderer currently works perfectly with an 8-bit unsigned char volume and an 8-bit unsigned char transfer function.
Now I want to change to another texture format, for example a 10-bit volume. For this case an unsigned short texture
should be enough. So what I did was convert my volumes to unsigned short and scale them to the range [0…4095].
I also changed the transfer function to this range, so the transfer function now has up to 4096 bins.
Both the volume data and the transfer function data are unsigned short, and their corresponding textures are also unsigned short.
But when the program runs, there is nothing to see.
The volume texture is created with:
glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, size.x, size.y, size.z, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
and volume upload:
glTexSubImage3D(GL_TEXTURE_3D, 0, offset.x, offset.y, offset.z, size.x, size.y, size.z, GL_RED, GL_UNSIGNED_SHORT, data);
The transfer function is created and uploaded the same way, but with glTexImage1D and glTexSubImage1D of course.
I don’t understand where the problem is. It shouldn’t matter whether I use the 8-bit or the 16-bit format.
GL_R16 is a normalised format, i.e. it holds values in the range 0.0 to 1.0 inclusive. When you upload data as GL_UNSIGNED_SHORT, a value of 0 is converted to 0.0 and a value of 65535 is converted to 1.0. A value of 4095 would be converted to ~0.0625.
Maybe you’re already allowing for that in the shader; I don’t know.
And thank you for your help.
Originally I used the new glTexStorage, but for compatibility reasons I moved back to glTexImage.
I thought that the data I upload with glTexSubImage is normalized to the range of the data I deliver (?).
So I thought that when I pass data from 0 up to 4095, this data would be normalized to [0 … 1.0].
But you are right that 4095 is just about 0.0625. I did not consider that in my shaders. Maybe that’s
the reason I can’t see anything with that texture format combination. I’ll give it a try.
@“Is GL_UNSIGNED_BYTE in glTexImage3D a typo? It should be GL_UNSIGNED_SHORT.”
The passed pointer is null, so I passed an appropriate data format. I think the internal format is the important part here.
Or should it match the internal format? With the new glTexStorage they decoupled the data part from the texture
creation part. But for glTexImage, some internal formats do not work with the given data format, even though the data pointer is null.
It’s normalised to the range of the type. So for GL_UNSIGNED_BYTE, 255 is 1.0, for GL_UNSIGNED_SHORT, 65535 is 1.0, etc.
Indeed. If you’re not uploading data (i.e. the data pointer is null and you’re not reading data from a buffer object), the type and format parameters don’t matter except that they need to be consistent with each other and with the other parameters (i.e. whether or not a parameter combination is invalid doesn’t depend upon whether data is being uploaded or you’re just allocating storage).
For colour formats, it’s valid to supply data with more components than the texture (extra components will be ignored) or with fewer components (missing components will be 0.0 or 0 for R, G or B and 1.0 or 1 for alpha).
I scaled the volume data to the range of [0 … 65535] and disabled the transfer function texture fetch in the shader. The volume shows up just fine.
So far that works. The transfer function is created this way: glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA16, size.x, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
Then I created a linear ramp transfer function with starting value (0,0,0,0) and end value (65535, 65535, 65535, 65535). But this results in a smooth black cube instead
of the volume I would usually expect. Any advice? So now the volume data and the transfer function data are within the same range as before with the
8-bit unsigned char textures.
I found the problem. So stupid.
Usually when I created the 8-bit texture I had 256 values, so I also had a texture of this size (256).
Now, with the 16-bit transfer function, I created a texture with length 65536. This is impossible (it exceeds the maximum 1D texture size on my hardware).
So I created a texture with a size of 4096 and a linear ramp from (0,0,0,0) to (65535, 65535, 65535, 65535) across indices 0 up to 4095.
It looks OK, but I don’t know if it is correct. I scaled the volume data from 0 to 65535, and the transfer function as well, but within
a smaller range (0 … 4095).
I think I got it.
As GClements mentioned above, the maximum value of 4095 is normalized with respect to the range of unsigned short and ends up around 0.0625.
So let’s assume that I scale the volume to 0…4095. The transfer function also has values from (0,0,0,0) to (4095,4095,4095,4095) and a texture size of 4096.
With respect to the unsigned short range, 4096 is 1/16 of 65536. So I multiplied the voxel value by 16 to get the correct lookup value (idx)
for the transfer function, because the maximum normalized value of the volume is 4096/65536. The value fetched from the transfer function is then also multiplied by
16, because the maximum normalized values there are also 4096/65536.
It works, but I think I lose some precision here. I could use the maximum 1D texture width to get more precision. For me that is 16384, so it’s only 1/4 of the ushort range.
Are there any suggestions for working with 12-bit volume data? At least we have to use unsigned short as the data type for the textures. I could scale up
to the range 0…max_of(unsigned short)-1. But what about the transfer function data?
How are you using this data?
What do you mean by how I’m using the data?
Usually the back projection of the CT volume happens in a float volume. Let’s say that we have projection data with 12-bit resolution. When reconstructing a volume,
one usually gets values from some large negative float up to some large positive float within the volume. So I could either scale that into an 8-bit range,
with lots of quality loss, or I can scale it into some bigger range with less quality loss. When I scale the volume data into a 12-bit range, the voxel values
range from 0 up to 4095. But as you said before, when uploading data to the texture, the values get normalized with respect to the range of the data type. The value 4095 is around 0.0625 in the
normalized range of unsigned short. The transfer function should have the same range as the volume. That means that I only use 1/16 of the unsigned short range.
Is there something that I’m missing?
How are you using the “transfer function” data? You’re uploading it to a texture, then … what?
If you’re reading values from the 3D texture as floats, then using those values to perform a lookup into the 1D texture with texture(), then you need to scale the value by 16 in order to use all of the data in the 1D texture (technically it should be by 65535.0/4095 =16.003663…, but 16 is probably close enough).
I tried something similar before. I had a 3D volume texture of type unsigned short with values from 0…16383 and a transfer function with bins from 0…16383. The scale factor is about four.
I had to multiply the fetched volume value by 4, and the resulting value from the transfer function as well, because 16383 is around 0.25 in the normalized range of unsigned short, right?
Let’s assume I’ve got a one-dimensional array of length 4096 with values from 0 up to 4095. Then what would be the value of this texture in the shader, for example if I fetch at position 1.0:
texture(texTransfer, 1.0)? I guess it will be 0.25 and not 1.0? Or am I wrong? Sorry, I’m a bit confused.