There’s a computation I’ve been trying to figure out how to perform on graphics cards. The catch is that we need to avoid texture lookups because of their expense; our code needs to be really fast (it’s not graphics related).
The issue is that I have a reasonably large amount of data that changes too often to be passed as uniform inputs. I also can’t send it as varying parameters to a fragment shader – as I understand it, varying parameters are always interpolated across the primitive.
My idea is this: is it feasible to run a vertex shader and a pixel (fragment) shader together such that exactly one vertex is aligned with each pixel, i.e. a one-to-one correspondence between vertex and fragment invocations? I know I can pass arbitrary arrays of data into vertex shaders (for example, via cgGLSetParameterPointer), and that a vertex shader can write its outputs into texture coordinates, which fragment shaders then receive.
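To make the idea concrete, here’s a rough sketch of what the shader pair might look like in Cg. It assumes the data is drawn as points (or pixel-sized primitives) positioned on exact pixel centers, so each vertex maps to exactly one fragment and the texcoord is never actually interpolated between vertices; all names here are hypothetical:

```cg
// Vertex shader: one vertex per output pixel. "myData" is an arbitrary
// per-vertex payload bound from a host-side array, e.g. via
// cgGLSetParameterPointer on the application side.
void vs_main(float4 position : POSITION,
             float4 myData   : TEXCOORD0,
             out float4 oPos  : POSITION,
             out float4 oData : TEXCOORD0)
{
    oPos  = position; // assumed pre-transformed to land on pixel centers
    oData = myData;   // pass-through; with one vertex per fragment there
                      // is nothing to interpolate, so it arrives unchanged
}

// Fragment shader: receives that vertex's data with no texture lookup.
void fs_main(float4 data : TEXCOORD0,
             out float4 color : COLOR)
{
    color = data;     // compute with "data" instead of a tex2D fetch
}
```

The load-bearing assumption is the one-vertex-per-pixel rasterization: if larger primitives are drawn, the TEXCOORD0 value is interpolated between vertices, which is exactly the behavior being avoided.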
My thinking is that this would let me feed varying data to fragment shaders without textures. Will it work the way I’m expecting?
And does avoiding actual texture lookups in this fashion save the (large) amount of time I think it would?
It seems to me the answer is yes… but I haven’t seen sample code doing this in situations where it would (seemingly) work well.