OpenGL data types

My understanding is that no current consumer cards support doubles, so you gain absolutely no precision by using them, but you do lose performance and memory.
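
Here's a rough sketch (assuming the old client-side vertex-array API) of what that means in practice. The GL_DOUBLE path is legal, but the driver just converts the data to single precision for you, so you pay for twice the memory and the conversion with nothing to show for it:

/* Same geometry submitted as floats vs. doubles.  The hardware works in
   single precision either way, so the float path is the one to use. */
#include <GL/gl.h>

static const GLfloat  verts_f[] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f };
static const GLdouble verts_d[] = { 0.0,  0.0,  1.0,  0.0,  0.0,  1.0  };

void draw_with_floats(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts_f);   /* preferred */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

void draw_with_doubles(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_DOUBLE, 0, verts_d);  /* legal, but converted to float internally */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}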
It's like asking for an RGBA16 texture on a card that supports RGBA8 at most: the driver ignores the 16 and acts as if you had asked for an RGBA8 texture.
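
If you want to see the texture case for yourself, something like this (untested sketch) asks for GL_RGBA16 and then queries how many bits per channel the driver actually allocated. Internal formats are only a request, so on 8-bit hardware the query will come back with 8:

/* Ask for 16 bits per channel, then check what we really got. */
#include <GL/gl.h>
#include <stdio.h>

void check_rgba16_support(const void *pixels, int w, int h)
{
    GLint red_bits = 0;

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT, pixels);

    /* The internal format is a hint; query the actual storage. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);
    printf("red channel stored with %d bits per texel\n", red_bits);
}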