No, I can’t think of any way to do this in OpenGL. And there are very very good reasons why you wouldn’t want to.
General rule: using ANY type of feedback mechanism in a realtime app can absolutely murder performance on hardware-accelerated systems.
Most, if not all, hardware drivers are pipelined. Many are very heavily pipelined; there may be a latency of entire frames between issuing a GL command and actually seeing the results on your screen.
So let’s look at what would happen if your idea were feasible. Suppose your driver has a 2-frame pipeline latency, and you’re currently issuing commands for frame 3 of an animation. You issue the command to draw a portal quad, and you want to know whether any pixels were affected.
Unfortunately, the actual hardware is still rendering frame 1. Before it can give you an answer, it has to finish rendering frame 1, render the whole of frame 2, and render everything you’ve issued so far for frame 3. Meanwhile your app is just sitting there twiddling its thumbs; it can’t do anything else until you get the answer.
Then you get the answer, make your decision, and carry on issuing commands for frame 3.
But now all those commands have to go through the entire rendering pipeline - a pipeline that’s supposed to take about 2 frames, remember - and until they do, it’s your hardware’s turn to sit around twiddling its thumbs.
End result: roughly 2 frames spent draining the pipeline, then roughly 2 more spent refilling it. In this case, your “optimization” test could theoretically wind up costing as much as 4 full frames every time you called it.
Pipeline flushes are BAD, and most feedback functions will cause them. Avoid them like the plague.