Not through core OpenGL, no. The OpenGL specification doesn’t require YCrCb / YUV texture support (e.g. YUV422, YUV444, etc.).
However, as a first cut you could easily just add one or two simple paths to your code to render these using standard OpenGL on pretty much any GPU. There are at least two options for this that don’t require any extension “special sauce”:
On the CPU, convert the YUV frames to an RGBA texture format natively supported by the GPU, subload those RGBA texels to the GPU, and render that. This is where the trivial chroma up-sample to full resolution would occur (and re-interleaving, if the source is planar). OR…
Write your frag shader to do separate texture lookups for Y, U, and V, then re-interleave and convert to RGB on-the-fly during rendering.
This would give you a good performance baseline, from which you can determine whether you need anything more. If your camera is lower resolution and/or frame rate, or if your frame-rate demands are modest, this might be sufficient.
That said, it’s worth checking whether the GPU you’re targeting has native support for rendering compressed/encoded video and/or YUV frames. It probably does: since video is ubiquitous nowadays, many GPUs offer hardware-accelerated video playback through dedicated decode/encode hardware on the GPU. Also, some GPU vendors offer special APIs or SDKs to make direct use of this hardware from applications. See what your options are there.
As far as Vulkan support (which you didn’t ask about), here’s a starter link. Search for “video” in the SIGGRAPH 2021 Khronos Vulkan BOF Presentation here: