3D Textures

Basically, my problem appears to be with the libraries. Since 3D textures are not supported under OpenGL 1.1 (I believe that’s the version that MSVC++ 6.0 ships headers and libraries for…), I can’t seem to use them at all.

Once I can get 3D textures to work at all, I’m going to attempt to display what is essentially a 3D probability density function, with the texture applied to multiple quads. I really don’t expect any problems with this, once I get over the initial library/include file hurdles.

Currently I’m attempting to use the include files that come with nVidia’s OpenGL SDK. Here are some snippets of pertinent code that I’m attempting to use right now:

#include <stdlib.h>
#include <math.h>
#include <windows.h>
#include <GL/glut.h>
#include <GL/glext.h>

#define GLH_EXT_SINGLE_FILE  //I copied this stuff from one of the nVidia demos.
#include <glh_extensions.h>
#include <glh_linear.h>
#include <nvparse.h>

using namespace glh;

//lots of code...but this is the pertinent portion:

	//Load texture
	glGenTextures(1, &texid);
	glBindTexture(GL_TEXTURE_3D, texid);


	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

	glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, size, size, size, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, img);

Again, the latter part was essentially copied from an nVidia demo (procedural 3D textures). The program fails at the glTexImage3D function call. With the above settings (I have used different ones…with different errors), I get an error "referenced memory at 0x00000000". If I instead use include files more like:

#include <stdlib.h>
#include <math.h>
#include <windows.h>

#include <glh_extensions.h>

…and add opengl32.lib (along with any or all of the libs that come with the NVSDK) to the list of library files to use, I get an unresolved external error on the glTexImage3D function call. Notice that I also cut out a bunch of include files…I don’t think I use most of 'em at all.

Now, I figure since the code at top was copied (nearly) directly from the proctex3d demo at nVidia’s site, you’d think it would work, as the code in that project compiles and runs just fine (And yes, I checked to make sure the invalid address wasn’t the pointer img).

Any ideas as to what they could be doing in the nVidia demo that I’m not doing?

3D texturing is an extension. The 0x00000000 error indicates you are calling through a null pointer, most likely the glTexImage3D function, since it doesn’t exist in the OpenGL 1.1 library. You will need to use some form of extension loading to get access to 3D textures; search the net for more info on loading extensions.

Thanks, I found out that all I needed to do was call wglGetProcAddress to get it to work (using the URL here).
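For anyone hitting the same wall, here’s a minimal sketch of what that fix looks like on Windows. The PFNGLTEXIMAGE3DPROC typedef comes from glext.h; the pointer name pglTexImage3D and the LoadTexImage3D wrapper are my own choices, not anything from the SDK:

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>  // supplies the PFNGLTEXIMAGE3DPROC typedef

// opengl32.lib only exports OpenGL 1.1 entry points, so glTexImage3D has
// to be fetched from the driver at run time. Calling through a missing or
// null pointer is exactly the "referenced memory at 0x00000000" crash.
static PFNGLTEXIMAGE3DPROC pglTexImage3D = NULL;

bool LoadTexImage3D()
{
    // A GL rendering context must already be current, or
    // wglGetProcAddress will just return NULL.
    pglTexImage3D =
        (PFNGLTEXIMAGE3DPROC)wglGetProcAddress("glTexImage3D");
    return pglTexImage3D != NULL;
}
```

After LoadTexImage3D() succeeds, call pglTexImage3D(…) with the same arguments as the original glTexImage3D call.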

Now my problem deals with getting it to transform. I imagine the texture maps between coordinates (0,0,0) and (1,1,1). Now I just need to find a way to properly rotate the texture. Anybody know offhand all of the details of implementing a texture matrix? I figure it should be relatively easy…but I’m not yet getting good results.

[This message has been edited by Chalnoth (edited 11-03-2002).]

OpenGL provides a texture matrix. Call glMatrixMode with GL_TEXTURE and you can modify the matrix.
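One wrinkle worth noting: since texture coordinates run from 0 to 1, a plain rotation in the texture matrix spins the volume about the texture’s corner, not its middle. A sketch of rotating about the center instead (angle is a placeholder; the calls look backwards because OpenGL post-multiplies, so the call nearest the vertices applies first):

```cpp
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslated(0.5, 0.5, 0.5);      // 3) move the pivot back to the center
glRotated(angle, 0.0, 0.0, 1.0);  // 2) rotate about the z axis (degrees)
glTranslated(-0.5, -0.5, -0.5);   // 1) move the texture center to the origin
glMatrixMode(GL_MODELVIEW);       // restore the usual matrix mode
```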

Well, I got that far, but I can’t seem to rotate the texture (I got it to translate and scale…).

That is, I want to display the texture across 16 quads that are all aligned parallel to the screen, and rotate the texture around to create a 3D effect. Here’s my current code:

	glScaled(0.5, 0.5, 0.5);
	glTranslated(-1.0, -1.0, -1.0);
	glRotated(1.0, 0.0, 0.0, alpha);
	glRotated(0.0, 1.0, 0.0, beta);
	glRotated(0.0, 0.0, 1.0, gamma);

With alpha, beta, gamma all being angles.

Now, I had assumed that the texture matrix is multiplied by the position coordinate of the current pixel being drawn to the screen. Is this correct? If so, what could be wrong with the above? Btw, I’m using identity matrices for both modelview and projection here, so my quads vary from (-1,-1,-1) to (1,1,1).

[This message has been edited by Chalnoth (edited 11-03-2002).]

Hi, your calls to glRotated have the parameters the wrong way around.
Should be:

	glRotated(alpha, 1.0, 0.0, 0.0);
	glRotated(beta, 0.0, 1.0, 0.0);
	glRotated(gamma, 0.0, 0.0, 1.0);

Old GLman

Haha! Oh, crud, it had to be something that simple, eh? Thanks!

Alright, now I have a different problem.

It’s looking like my probability distribution (in this case, I’m displaying a P-orbital for a hydrogen atom) still isn’t displaying entirely correctly.

That is, each slice that I take looks identical. This should definitely not be the case. When viewed top-down, the P-orbital should look almost spherical (it’s sort of like two dumbbells when viewed from the side). Instead, it just disappears entirely, which should only happen for the plane in the very center. Any other ideas? For reference, here’s my drawing code:

    glBindTexture(GL_TEXTURE_3D, texid);

    glScaled(0.5, 0.5, 0.5);
    glTranslated(-1.0, -1.0, -1.0);
    glRotated(alpha, 1.0, 0.0, 0.0);
    glRotated(beta, 0.0, 1.0, 0.0);
    glRotated(gamma, 0.0, 0.0, 1.0);

    glBlendFunc(GL_ONE, GL_ONE);
    double d;
    for (d = 1.0; d > -1.0; d -= 0.25)
    {
        glBegin(GL_QUADS);
        glVertex3d(-1.0, 1.0, d);
        glVertex3d(1.0, 1.0, d);
        glVertex3d(1.0, -1.0, d);
        glVertex3d(-1.0, -1.0, d);
        glEnd();
    }

[This message has been edited by Chalnoth (edited 11-04-2002).]

Well, I figured it out.

I went through the OpenGL specs and did the matrix multiplication myself. As it turns out, it’s a driver bug or hardware limitation. When generating the texture coordinates myself, I get exactly what I want. When generating them with the texture matrix, something gets screwed up with the z-coordinate.

So, unless somebody else has a different explanation, I most definitely found a driver bug/hardware limitation.

(fyi: GeForce4 Ti 4200, 41.03 drivers)
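For reference, the workaround sketched out: leave the texture matrix at identity and apply the transform to the texture coordinates on the CPU, passing them explicitly with glTexCoord3d. rotateXYZ is a hypothetical helper (not from any library) standing in for the alpha/beta/gamma rotation:

```cpp
// rotateXYZ is a hypothetical helper that rotates (x, y, z) by the
// alpha/beta/gamma Euler angles and writes the result to (tx, ty, tz).
void emitVertex(double x, double y, double z)
{
    double tx, ty, tz;
    rotateXYZ(x, y, z, alpha, beta, gamma, &tx, &ty, &tz);
    // same order the texture matrix would apply: rotate, then
    // translate by (-1, -1, -1), then scale by 0.5
    glTexCoord3d(0.5 * (tx - 1.0), 0.5 * (ty - 1.0), 0.5 * (tz - 1.0));
    glVertex3d(x, y, z);
}
```

Each quad vertex in the slice loop then goes through emitVertex instead of a bare glVertex3d call.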

[This message has been edited by Chalnoth (edited 11-04-2002).]