You should ideally let OpenGL perform the conversion.
For reading sRGB-encoded textures (or any gamma-corrected texture, which will be much closer to sRGB than to linear), tell OpenGL that the texture is sRGB so that filtering is performed correctly; interpolating the encoded values and then converting manually is incorrect. This shouldn't require any support from Qt: just use GL_SRGB8 or GL_SRGB8_ALPHA8 (rather than GL_RGB8 or GL_RGBA8) as the texture's internal format. Manually converting the result of e.g. texture2D() is only correct with nearest-neighbour sampling (no interpolation).
Conversion from linear intensity to sRGB-encoded values on output may require support from Qt if you want OpenGL to perform the conversion when writing to the default framebuffer (that path needs an sRGB-capable framebuffer and GL_FRAMEBUFFER_SRGB enabled). Alternatively, this conversion can safely be done manually in the fragment shader, since no filtering happens after the final write.
No, but it’s close enough unless you’re dealing with values very close to black. And it may in fact be exactly right for some textures: sRGB was a minor adjustment to existing practice, so some “sRGB” textures may really be intensity^(1/2.2) rather than actual sRGB. The main reason for sRGB’s adjustment is that a pure power curve with an exponent less than one has a vertical slope at zero, meaning that an encode-decode cycle loses accuracy for values very close to zero; sRGB therefore replaces the curve near zero with a linear segment.