GLSL texture index

hi,

I'm trying to access pixels of a texture in GLSL. I have a single index derived from depth, width, and height. Since the texture is kept as a single-index array, I convert those pixel coordinates to:

depth*textureWidth*textureHeight + height*textureWidth + width

The problem is, after googling around, I found out that the way to access texture coordinates in GLSL is through decimal values, since texture coordinates run from 0 to 1. So, for example, if I want to access the next pixel of a 2D texture, it would be:

width + (1/textureWidth)

I haven't tried it yet, but will it cause a decimal rounding error when processed? Can I use a discrete coordinate instead? For example, if I want to access the next pixel, it would be (width + 1)? My program is very sensitive to this: different data in the next pixel may cause a totally different result. Is there any way to do discrete indexing in GLSL?

thanks in advance

Since the texture is kept as a single-index array

What do you mean by that? Textures in OpenGL each have a texture type. 1D, 2D, 3D are all texture types.

If you created the texture with glTexImage3D, then it is a 3D texture. You access it with a 3D texture coordinate. You do not need to manually convert the vec3 into a “single index array”; OpenGL does that for you.
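To illustrate (a minimal sketch; the uniform names and helper function are assumptions, not from the original post), reading texel (x, y, z) of a 3D texture means normalizing the integer coordinate rather than flattening it yourself:

```glsl
// GLSL 1.20-style sketch; "dataTex" and "texSize" are assumed names.
uniform sampler3D dataTex;
uniform vec3 texSize;   // (textureWidth, textureHeight, textureDepth)

vec4 fetch3D(vec3 p)    // p holds integer texel coordinates (width, height, depth)
{
    // OpenGL handles the flat addressing internally; you only supply
    // a 3D coordinate normalized to [0, 1]. The +0.5 targets the texel center.
    return texture3D(dataTex, (p + 0.5) / texSize);
}
```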

The problem is, after googling around, I found out that the way to access texture coordinates in GLSL is through decimal values, since texture coordinates run from 0 to 1.

I haven't tried it yet, but will it cause a decimal rounding error when processed?

No. You have plenty of room within a texel, as long as you’re using nearest filtering. And even then, you’ll likely get the right answer.
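Concretely, the usual trick is to sample the center of a texel rather than its edge, which keeps you half a texel away from any rounding boundary (a sketch, with assumed names):

```glsl
uniform sampler2D dataTex;  // assumed name; requires GL_NEAREST filtering
uniform vec2 texSize;       // (textureWidth, textureHeight)

vec4 fetchTexel(ivec2 p)
{
    // (p + 0.5) / size lands exactly in the middle of texel p,
    // as far as possible from the nearest texel boundary.
    vec2 uv = (vec2(p) + 0.5) / texSize;
    return texture2D(dataTex, uv);
}
```

With this, the “next pixel” is simply `fetchTexel(p + ivec2(1, 0))`.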

Can I use a discrete coordinate instead?

If you’re using GLSL 1.30 or above, you may use the texelFetch function.
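For example (a sketch; the sampler name and coordinates are placeholders), texelFetch takes integer texel coordinates directly, with no normalization and no filtering involved:

```glsl
#version 130
uniform sampler2D dataTex;   // placeholder name

void main()
{
    // Integer coordinates: texel (10, 20) at mipmap level 0.
    // The "next pixel" really is just +1 here; no rounding is involved.
    vec4 value = texelFetch(dataTex, ivec2(10, 20), 0);
    gl_FragColor = value;
}
```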

Hmm, OK, here is my problem.

I have a massive array that needs to be sent to and processed in GLSL. I've googled around, and it's hard to find a way to send an array of this size to GLSL (usually it's just 2, 3, or 4 values). So I decided to put my array in a texture and access it through my own single index.

But I'm not familiar with GLSL. Is there a better way to solve my problem? Can I send a massive array (millions of elements) to GLSL?

thanks in advance

I have a massive array that needs to be sent to and processed in GLSL.

How do you intend to process it? Is this something where you loop over every X elements and perform some action, or do you need random access to them?

Though really, if you’re just processing arbitrary data, I’d suggest OpenCL.

Yeah, I'm considering switching, but I think not this time; the learning curve would probably take too long.

Yes, it's going to be a loop over X elements, with a guided index. I was hinted elsewhere that it's impossible to use a massive array in GLSL, so I guess I'll keep it as a texture and loop through it (with width, height, and depth rules). But I'm curious: since pixel indexing in GLSL uses decimal values, I'm kinda afraid the rounding will cause an error. I guess I'm going to try this first. Any other advice would be helpful.

Yes, it's going to be a loop over X elements, with a guided index.

What I meant was whether the computation for each element is independent of the others, because OpenGL does a lot of that. In vertex shaders, for example, each vertex computes its output data independently of the others.

In short: can you put your arrays in vertex attributes and just run a vertex shader over them?
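As a sketch of that idea (the attribute and output names are made up): each array element becomes a vertex attribute, one vertex per element, and the shader's per-vertex output is the per-element result, which could be captured with transform feedback or by rendering to a texture:

```glsl
#version 130
in float element;    // one array element per vertex (assumed attribute name)
out float result;    // per-element output, e.g. captured via transform feedback

void main()
{
    // Each vertex is processed independently, so the GPU can run
    // millions of these in parallel.
    result = element * element;          // placeholder per-element computation
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
```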

But I'm curious: since pixel indexing in GLSL uses decimal values, I'm kinda afraid the rounding will cause an error.

I addressed this point already.

hi,

I think I'll just post my intention. What I want to do is basically save a sequence of values per pixel. Imagine you have a 100x100 texture, and each pixel of this texture has a sequence of values; say pixel 1 has the values 0.1, 0.4, 0.6. Now, the sequence size differs per pixel: if pixel 1 has 4 values, pixel 2 may have 10 values. So I don't think I'm going to do this with a 3D texture; there would be lots of redundancy if I had to pad the short sequences to fit the longest one.

So I thought I'd just create a very long array: pixel 1 starts at array index 0, pixel 2 starts at array index 10, pixel 3 starts at array index 13, etc. And I keep the indices by creating another 100x100 texture, whose RGB (or whatever) values represent the starting point of each pixel's sequence.

When I try to render the image, I simply take the RGB index, convert it to a single index, and use it as my loop's starting point to find the value that should be shown. That's why I'm curious whether I can do a single index based on integers, so my calculation can be precise without rounding. Any hint/info/idea?

thanks in advance

I think I'll just post my intention. What I want to do is basically save a sequence of values per pixel. Imagine you have a 100x100 texture, and each pixel of this texture has a sequence of values; say pixel 1 has the values 0.1, 0.4, 0.6. Now, the sequence size differs per pixel: if pixel 1 has 4 values, pixel 2 may have 10 values. So I don't think I'm going to do this with a 3D texture; there would be lots of redundancy if I had to pad the short sequences to fit the longest one.

Are you trying to write this from a shader or read it in a shader? Writing is really something you need OpenCL for. Reading is possible, though it depends on how you know where your data is.

Given a particular pixel, and the data stored in a linear array, how can you calculate where this pixel’s data begins and where it ends? Can you calculate it at all, or do you have to keep a running total as you loop over each “sequence”?
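For what it's worth, the scheme described above could look roughly like this in GLSL 1.30 (a sketch under stated assumptions: all names are placeholders, the start offsets are stored as raw counts in the red channel, each pixel's sequence ends where the next pixel's begins, and edge cases at row boundaries are ignored):

```glsl
#version 130
uniform sampler2D indexTex;  // 100x100 texture of per-pixel start offsets (assumed)
uniform sampler2D dataTex;   // the long linear array, packed row-by-row into 2D
uniform int dataWidth;       // width of dataTex in texels

float fetchData(int i)
{
    // Unflatten the single integer index into 2D texel coordinates;
    // texelFetch keeps everything in integers, so no rounding can occur.
    return texelFetch(dataTex, ivec2(i % dataWidth, i / dataWidth), 0).r;
}

void main()
{
    ivec2 pix   = ivec2(gl_FragCoord.xy);
    int   start = int(texelFetch(indexTex, pix, 0).r);
    int   end   = int(texelFetch(indexTex, pix + ivec2(1, 0), 0).r); // simplified end marker

    float sum = 0.0;                       // placeholder per-sequence computation
    for (int i = start; i < end; ++i)
        sum += fetchData(i);

    gl_FragColor = vec4(sum);
}
```

The key point is that every index stays an `int` from the index texture all the way into `texelFetch`, so there is no decimal coordinate to round.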

That's why I'm curious whether I can do a single index based on integers, so my calculation can be precise without rounding. Any hint/info/idea?

Um, what part of “If you’re using GLSL 1.30 or above, you may use the texelFetch function,” did you not understand?
