Giggles, re “Point 2 of this post”: that post was mine.
At any rate, I have seen the binary shader thing thrown around a lot. What goes through my head is the idea of a binary blob used as an optional hint… and then the horrible, icky part comes into play: deployment. Chances are one will need a binary blob for each major generation of each major GPU architecture. On PC, that right now means six blobs: (GL2-cards, GL3-cards, GL4-cards) x (nVidia or AMD). [I don’t even consider Intel anymore at this point.] But the plot thickens further with OS and driver versions.
One can go for this: the application does not ship those blobs, but rather generates them on first run and re-uses them afterwards, so we get something like:
glHintedCompileShader(const char *GLSLcode, int blob_length, GLubyte *binary_blob);
plus maybe a query for the blob size or something, and lastly a glGetBinaryBlob to read it back.
It is feasible to do, and in most cases only the first run of the application would pay for a “full” compile… the sticky bit is that the idea above assumes the “binary blob” does not depend at all on GL state (who knows what the driver does as GL state changes; see my comments on Tegra)…
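To make the cache-on-first-run idea concrete, here is a minimal sketch in C. It uses the hypothetical glHintedCompileShader and glGetBinaryBlob entry points proposed above (they are not real GL functions), and glGetBinaryBlobLength is my guess at what the “query or something” could look like; the cache file path and format are assumptions as well.

#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Hypothetical entry points as proposed above; none of these exist in GL. */
extern void glHintedCompileShader(const char *GLSLcode,
                                  int blob_length, GLubyte *binary_blob);
extern int  glGetBinaryBlobLength(void);   /* the "query or something" */
extern void glGetBinaryBlob(int buf_size, GLubyte *binary_blob);

static void compile_with_cache(const char *glsl, const char *cache_path)
{
    FILE *f = fopen(cache_path, "rb");
    if (f) {
        /* Later runs: feed the cached blob back as an optional hint. */
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        fseek(f, 0, SEEK_SET);
        GLubyte *blob = malloc((size_t)len);
        fread(blob, 1, (size_t)len, f);
        fclose(f);
        glHintedCompileShader(glsl, (int)len, blob);
        free(blob);
    } else {
        /* First run: full compile, then stash the driver's blob on disk. */
        glHintedCompileShader(glsl, 0, NULL);
        int len = glGetBinaryBlobLength();
        GLubyte *blob = malloc((size_t)len);
        glGetBinaryBlob(len, blob);
        f = fopen(cache_path, "wb");
        fwrite(blob, 1, (size_t)len, f);
        fclose(f);
        free(blob);
    }
}

In practice the cache file would also have to be keyed by GPU and driver version (the deployment matrix above), and the application would need to fall back to a full compile whenever the driver rejects the hint.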
But the idea of shipping static binary blobs to cut down on startup time does not look so feasible to me with constantly evolving hardware and drivers… it is kind of feasible in the embedded world, but only with an incredible amount of care.