Hi, I wrote a program to render some volume data. The data changes constantly over time, so I need to re-upload it every frame; each update is about 19 MB. On Windows this works well, but on Linux it is about 10 ms slower. I added logging and found that after calling glTexImage3D, the first call to glBindTexture becomes extremely slow, taking about 10 ms. This does not seem normal. My graphics card is an RTX 2060 Super, driver version 525.125.06. Has anyone seen the same problem? How could I solve it?
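For reference, this is roughly how I log the per-call timings (a trimmed-down sketch; `time_call` is my own helper, and in the real program the lambda wraps the actual glTexImage3D / glBindTexture calls):

```cpp
#include <cassert>
#include <chrono>
#include <functional>

// Measure one call's wall-clock duration in milliseconds.
// In the real program the lambda wraps a single GL call, e.g.
//   time_call([]{ glBindTexture(GL_TEXTURE_3D, tex); });
static double time_call(const std::function<void()>& fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Note that GL calls execute asynchronously, so wall-clock timing of a single call only shows where the CPU blocks, not where the GPU spends its time.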
Is it slow on every frame you bind it, or is it only slow the first time? If it’s just the first time, I suspect that the driver is waiting to do all of the hard stuff until you bind it.
It is only slow the first time. Your reply sounds reasonable, but I don't understand why this happens only on Linux. On Windows, glBindTexture takes no time in the same situation.
You don't know that a GL texture has been allocated space on the GPU, had its texels uploaded, and been placed in a ready-to-render state until you actually pre-render with it at least once.
That said, the driver could perform any or all of this work before that point, including in the background (asynchronously), or synchronously via implicit sync, during any GL call or calls it so chooses, and on any thread. A command queue flush to the driver could yield this behavior. Though 10 msec is a very long time.
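The "cheap call now, deferred work later" pattern can be illustrated with a toy model (invented names, not real driver code; real drivers are vastly more complex): the upload call merely records the request, and the first bind afterwards pays for it.

```cpp
#include <cassert>
#include <cstddef>

// Toy model of a driver that defers texture-upload work.
// upload() is cheap and only queues the request; the first bind()
// afterwards performs all pending work (the observed "stall"),
// and later binds find nothing queued.
struct DriverModel {
    std::size_t pending_bytes = 0;  // work queued but not yet done
    std::size_t work_done = 0;      // total bytes actually processed

    void upload(std::size_t bytes) { pending_bytes += bytes; }

    // Returns how many bytes of deferred work this bind performed.
    std::size_t bind() {
        std::size_t cost = pending_bytes;
        work_done += pending_bytes;
        pending_bytes = 0;
        return cost;
    }
};
```

In this model, the first `bind()` after `upload(19 MB)` reports the whole 19 MB of deferred work and a second `bind()` reports zero, which matches the pattern in the log: only the first glBindTexture after glTexImage3D is slow.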
Talk about that glTexImage3D you're calling immediately beforehand.

- If you remove this, does the glBindTexture stall go away?
- If you do a glBindTexture right after it instead, does the glBindTexture stall go away?
- What texture MIP resolution are you allocating in the glTexImage3D?
- What is the internal texel format?
- Do you have a PBO bound at the time (i.e. a buffer bound to GL_PIXEL_UNPACK_BUFFER)?
- If not, are you providing a non-NULL CPU texels pointer?
- Or are you providing a NULL CPU texels pointer?

And for the glBindTexture:

- What type of texture are you binding?
- Is it pre-existing or an unused texture handle?
- If pre-existing, what resolution and internal format is it?
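Also, if you are re-calling glTexImage3D every frame on the same texture, it's worth comparing against allocating once and updating in place. A non-runnable sketch (assumes a current GL 4.2+ context; `w`, `h`, `d`, and `voxels` are placeholders, and GL_R8/GL_RED stand in for whatever your actual format is):

```
// At startup: allocate immutable storage once.
glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8, w, h, d);

// Every frame: update texels in place instead of re-calling
// glTexImage3D, which respecifies (and may reallocate) the texture.
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, w, h, d,
                GL_RED, GL_UNSIGNED_BYTE, voxels);
```

Respecifying the texture every frame gives the driver much more deferred allocation work to hide somewhere than a plain sub-image update does.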