What is the best way to get the fastest texture uploads?

In our project, we use many textures, and uploading them takes a lot of time.

I’ve read that using compressed textures reduces upload time, but I can’t find out how to do it.

Many blogs and pages are too old. They’re from 5–6 years ago, and I don’t think they’re still accurate now…

I tried the ASTC (HDR, LDR) formats, but they’re 10 times slower than uncompressed textures.
Reading the JPEG file and calling the function below takes 300 ms:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);

But with the function below, it takes 3000 ms:

glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ASTC_12x12_KHR, xsize, ysize, 0, size, buffer);

What’s wrong? The compressed image is much smaller, so why does transferring less data take more time? I can’t understand it…
Is glCompressedTexImage2D just a slow function?
Or does an NVIDIA GPU (1080 Ti) not work with the ASTC format?

Please help me!

I think the issue is that the GPU (GTX 1080 Ti) doesn’t natively support ASTC, either HDR or LDR.

Go here on gpuinfo.org:

This lists the OpenGL driver reports that say they have GL_KHR_texture_compression_astc_hdr support.

Under “Renderer”, type in 1080, GTX, or NVIDIA. What do you see? This doesn’t prove that this GPU can’t support this extension, but it does say that there are no OpenGL driver reports in the database that show it supporting it.

Now do the same test with the ASTC LDR extension:

If you try the same search strings, you’ll find that the only matches in the NVIDIA GPU line are Tegra X1 and X2 (mobile) GPUs.

That said, you don’t have to trust the posted driver reports. Feel free to Download the OpenGL Hardware Capability Viewer, run it on your GTX 1080 Ti, and verify that your GPU+driver doesn’t support ASTC.

Unfortunately, there isn’t much (if any) native support for ASTC on desktop GPUs at all. I wish there were. This would make it easier to deploy assets across both mobile and desktop GPUs without having to republish them.

Another possible explanation is that your app is running on top of an OpenGL ES emulator library (such as PVRVFrame, ANGLE, Mesa3D, etc.) which is dynamically mapping OpenGL ES calls (like references to ASTC compressed textures and data) to native OpenGL or Direct3D calls exposed in the native (GTX 1080 Ti native) graphics libraries and driver.

In this emulation layer it’s dynamically decompressing the texture (probably on the CPU) and either uploading it to the GPU in an uncompressed format, or dynamically recompressing it in a GPU texture compression format natively supported by the GPU and uploading that (though my bets are on the former). I’ve seen the slowdown you’re talking about before, when running on PVRVFrame + an NVIDIA desktop GPU and uploading PVRTC (a vendor- and mobile-GPU-specific compressed texture format).

Long story short: profiling your mobile app when running on desktop on top of one of these emulation layers is fraught with problems. Not the least of which is that the GPU architecture is totally different, and things that run efficiently on desktop can perform horribly on a mobile GPU (or vice versa, in the case I suspect you’re looking at now).


Oh… thanks. You’ve saved me from more pointless attempts.
Then do you know which compressed texture formats are supported on an NVIDIA 1080 Ti or 2080 Ti?
Or another way to upload textures quickly?

I think uncompressed textures are just too slow. What can I do?

I looked at that site and opened the ‘NVIDIA GTX 1080 Ti’ page.
I see that ‘Extensions’ does not include the ASTC formats, but ‘Compressed formats’ does include them.
I think that means the extension list shows hardware support, while ‘Compressed formats’ shows software support (it can render, just not fast). Is that right?

The ones that have been supported for a long time now: the standard desktop compressed formats S3TC, RGTC, and the slightly newer BPTC. The last two are core OpenGL and have been for quite some time, while the first has been supported on desktop hardware for decades.

It seems to me that you need to separate texture upload performance from how you load the texture. 300ms for a single image’s upload is pretty steep unless the image is gigantic or something. However, you said “read jpeg file”, and the reading and processing of JPEG is not cheap (relatively speaking). I could definitely see that taking the lion’s share of that 300ms.

So you first need to see if your performance issues are due to the OpenGL side of the process or are due to the time it takes to get the data to the point where you’re going to make an OpenGL call. Because OpenGL can’t fix the latter.

Honestly, I’d forgotten that implementations even specified which compressed formats they work with. I probably forgot about this because those queries are deprecated and therefore shouldn’t be used. I don’t even know why the database queries that stuff.

So yes, use what the extensions/GL versions say, not by what that query says.


Here you go:


S3TC is the main one, and it’s been around forever. This is what folks generally refer to when they say “DXT” texture compression. LATC and RGTC IIRC are just flavors of DXT5’s alpha channel handling used for 1- and 2-channel textures. Also old and been around forever. VTC IIRC is for volumetric (3D) texture compression. BPTC was added later (7 years ago), so many GPUs out there will support this as well.

But don’t trust me. Grab visualinfo, glewinfo, the OpenGL Extensions Viewer I linked to above, or whatever, and check the extensions exposed by your driver.

For details on one, check it out in the OpenGL Extension Registry, the wiki pages Alfonse linked to, or one of the many, many pages and presentations on the net describing these.

To verify internal format support on a particular GPU + driver, see the bottom of this post:


I’ll try those 3 formats. Thanks.

It was my mistake: 300 ms is the time to load 20 images.
I measured again, and the upload time alone (excluding JPEG loading and conversion) is 120 ms. You’re right.

I tried S3TC and got uploads 8 times faster than with the uncompressed format.
I’ll try the other 2 formats too. Thanks!