Creating a sampler3D texture using the cube data

I am working on open-source software that visualizes chemical molecules, so we currently create meshes with marching cubes/flying edges. Now I am working on a volumetric-rendering feature that will look something like this:

image

How can I create volumeData as a 3D texture, so that I can use

uniform sampler3D volumeData

Can I reuse the cube data that my marching-cubes code already uses? I am a bit confused about how to use the cube data/dimensions to create my volumeData.

@GClements any help? Sorry for tagging you, but my question is just how to create the sampler3D from the cube data I get (the dimensions, etc.).

Creating a 3D texture and filling it with the cube data doesn’t work?
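For what it’s worth, a minimal host-side sketch of that idea might look like the following. The names `nx`, `ny`, `nz`, and `cubeValues` are placeholders for whatever the marching-cubes pass already has (one scalar per grid point):

```cpp
// Upload the marching-cubes scalar field as a 3D texture.
GLuint volumeTex = 0;
glGenTextures(1, &volumeTex);
glBindTexture(GL_TEXTURE_3D, volumeTex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
// GL_R32F needs GL 3.0+; on older GL you'd use e.g. GL_LUMINANCE instead.
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, nx, ny, nz, 0,
             GL_RED, GL_FLOAT, cubeValues); // cubeValues: const float*
```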

That’s exactly what I’m asking: would it work or not?

Define “work”. Creating a 3D texture is the easy part. What are you planning to do with it after that? Common approaches to volumetric rendering are rendering a set of quads with each quad textured using a slice through the 3D texture, or rendering a single quad and having the fragment shader use ray marching for each fragment.
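As a rough illustration of the first approach (slice quads), the fragment shader for one quad could be as simple as the sketch below, with `slice` being that quad’s depth within the volume. This is just a sketch under the assumption that the quads are drawn back-to-front with blending enabled:

```glsl
#version 120

varying vec2 UV;
uniform sampler3D volumeData;
uniform float slice; // this quad's position along the volume's Z, in [0..1]

void main()
{
    // Sample the density at this slice and use it for both color and alpha.
    float density = texture3D(volumeData, vec3(UV, slice)).r;
    gl_FragColor = vec4(vec3(density), density);
}
```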

Yes, and if you use a geometry shader you can render just one quad and then draw to each layer by assigning the layer number to the gl_Layer output.
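A minimal sketch of that geometry shader (assuming a layered framebuffer and up to 32 layers; `numLayers` is a placeholder uniform):

```glsl
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 96) out; // 32 layers * 3 vertices

uniform int numLayers; // number of slices in the layered FBO

void main()
{
    for (int layer = 0; layer < numLayers; ++layer) {
        // Route this copy of the triangle to slice `layer` of the FBO.
        gl_Layer = layer;
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```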

Thanks for your response @GClements , it’s always a pleasure to receive help from you guys. It really means a lot to me.

Actually, I have done all the steps correctly

I have two framebuffers side-by-side. I composite them onto the first framebuffer using a fullscreen shader. First, I enable face culling with gl.enable(gl.CULL_FACE) and set it to cull front faces with gl.cullFace(gl.FRONT). This allows me to draw only the back faces of the volume, and by doing so I can capture the depth of the back faces in the first framebuffer. I repeat this process for the second framebuffer, but this time I cull back faces with gl.cullFace(gl.BACK) to capture only the depth of the front faces.
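In host code, those two passes look roughly like this (the FBO names and `drawVolumeBox()` are placeholders for the app’s own objects):

```cpp
glEnable(GL_CULL_FACE);

// Pass 1: cull front faces, so only back faces are drawn -> back-face depth.
glBindFramebuffer(GL_FRAMEBUFFER, backDepthFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glCullFace(GL_FRONT);
drawVolumeBox();

// Pass 2: cull back faces, so only front faces are drawn -> front-face depth.
glBindFramebuffer(GL_FRAMEBUFFER, frontDepthFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glCullFace(GL_BACK);
drawVolumeBox();
```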

Next, I pass these two depth buffers as textures to my final shader. For each pixel, if both the back depth and front depth have values, it indicates that the pixel lies within the volume. The difference between these two depth values represents the thickness of the volume at that point. I can then use this thickness to adjust the opacity of the pixel accordingly.

Here’s my shader. I don’t know why I am not getting the correct cloudy volumetric look:

#version 120

// Interpolated UV from the vertex shader
varying vec2 UV;

// Scene textures
uniform sampler2D inRGBTex;
uniform sampler2D inFrontDepthTex;
uniform sampler2D inBackDepthTex;

// The 3D volume data
uniform sampler3D uVolumeData;

// A 2D colormap texture (the “transfer function”)
// uniform sampler2D colormap;

// Some toggles from your pipeline (if needed)
uniform float inAoEnabled;
uniform float inAoStrength;
uniform float inEdStrength;

// Screen size (passed in from C++ code, if needed)
uniform float width;
uniform float height;

// Transfer-function range
uniform float transferMin;
uniform float transferMax;

// How many steps to take, etc.
uniform int   numSteps;       // e.g., 128
uniform float alphaScale;     // e.g., 0.1 or something similar

void main()
{
  // 1) Fetch the scene color
  vec4 sceneColor = texture2D(inRGBTex, UV);

  // 2) Fetch front and back depths 
  float frontDepth = texture2D(inFrontDepthTex, UV).r;
  float backDepth  = texture2D(inBackDepthTex,  UV).r;

  // Basic sanity checks: if the box is clipped or if front/back are invalid
  if (frontDepth >= 1.0 || backDepth >= 1.0 || backDepth <= frontDepth) {
    gl_FragColor = sceneColor;
    return;
  }

  // 3) Compute the total “thickness” in normalized [0..1] Z
  float thickness = backDepth - frontDepth;

  // Step size for the raymarch
  float stepSize = thickness / float(numSteps);

  // 4) Accumulate color over the ray
  vec4 accumulatedColor = vec4(0.0);

  // Raymarch from frontDepth to backDepth
  for (int i = 0; i < numSteps; i++) {
    // Parametric Z coordinate in [frontDepth..backDepth]
    float z = frontDepth + (float(i) + 0.5) * stepSize;

    // UVW in volume texture: XY from screen, Z in [0..1] (assuming the volume
    // is also in [0..1] for that axis). You may need to invert or shift if
    // your volume is mapped differently.
    vec3 uvw = vec3(UV, -z);

    // Sample the raw density or intensity from the volume
    float rawVal = texture3D(uVolumeData, uvw).r;

    // Map that raw value to [0..1] for a colormap lookup
    float cval = (rawVal - transferMin) / (transferMax - transferMin);
    cval = clamp(cval, 0.0, 1.0);

    // Fetch a color from the colormap — assume 1D colormap along X,
    // picking the center of Y=0.5 if it’s just a 1D gradient stored in a 2D texture
    vec4 sampleColor = vec4(1.0,0.0,1.0,0.5);

    // Scale alpha if you want the volume to be more or less transparent
    // (like your ALPHA_SCALE from the original code)
    sampleColor.a *= alphaScale;

    // Standard “over” alpha compositing:
    float remainingAlpha = 1.0 - accumulatedColor.a;
    accumulatedColor.rgb += sampleColor.rgb * sampleColor.a * remainingAlpha;
    accumulatedColor.a   += sampleColor.a * remainingAlpha;

    // Optional early-out once accumulated alpha reaches a threshold
    // (note: 0.5 is far from fully opaque; 0.95+ is more typical):
    if (accumulatedColor.a >= 0.5)
      break;
  }

  // 5) (Optional) If you have toggles for AO or edges:
  //    For demonstration, we do something simple:
  if (inAoEnabled < 0.5) {
    // Example: make the volume darker if AO is disabled
    accumulatedColor.rgb *= 0.5;
  }
  // Scale by AO strength (could be done differently)
  accumulatedColor.rgb *= inAoStrength;

  // 6) Composite final volume color over the original scene
  //    Similar to “1 - alpha” logic you had:
  float oneMinusA = 1.0 - accumulatedColor.a;
  vec3 finalRGB   = accumulatedColor.rgb + oneMinusA * sceneColor.rgb;
  float finalA    = sceneColor.a + oneMinusA * accumulatedColor.a;

  // Write out final pixel color
  gl_FragColor = vec4(finalRGB, finalA);
}

Here’s the output I get:

Could you please take some time to look at it and explain where I am going wrong? I always get an empty box, and when I try to render a molecule it still doesn’t show anything.

Are you also switching the depth test to GL_GREATER so that you’re capturing the depth of the farthest face rather than the nearest? Unless the shape is known to be convex, you can’t assume that there will only be one front face and one back face for each fragment.
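For the back-face pass, that would mean something along these lines (a sketch; the clear value has to change too, since a GL_GREATER test against a buffer cleared to 1.0 would never pass):

```cpp
// Capture the *farthest* back face instead of the nearest.
glClearDepth(0.0);            // clear toward the near value...
glClear(GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_GREATER);      // ...and keep the greatest (farthest) depth
```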

Why are you negating z? Assuming that the depth textures are unsigned, z will be positive. So using 3D texture coordinates with negative Z with a sampler3D will result in it wrapping (repeat or clamp, as configured).
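If the intent is for the third texture coordinate to span the volume between the two faces, one possible (hedged) alternative to the negation would be to normalize z by the thickness instead:

```glsl
// Assumes the volume's Z axis runs front face -> back face, in [0..1].
vec3 uvw = vec3(UV, (z - frontDepth) / thickness);
```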

Other than that, you can try debugging the shader by writing out specific intermediate variables from the fragment shader to float32 textures (either using fragment shader outputs or using imageStore if you have OpenGL 4.2 or the ARB_shader_image_load_store extension). Start by writing out the thickness variable.
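A sketch of the imageStore route (`debugImage` is a placeholder; this needs `#version 420` or the ARB_shader_image_load_store extension):

```glsl
#version 420
layout(r32f, binding = 0) writeonly uniform image2D debugImage;

void debugWrite(float value)
{
    // Write one float per fragment into a float32 image for inspection,
    // e.g. debugWrite(thickness) from inside main().
    imageStore(debugImage, ivec2(gl_FragCoord.xy), vec4(value));
}
```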