Features vs. Extensions: where does the line get drawn?

Vulkan has a built-in concept of features: parts of the standard that a particular Vulkan implementation does not have to implement. Some features amount to little more than "can a particular maximum value exceed 1", such as samplerAnisotropy. Other features are more substantial, like the geometry and tessellation shader stages, or the ability to perform image load/store in vertex-processing shaders (something OpenGL handled with mere limit constants).

Sparse resources are probably the most substantial optional feature in Vulkan; code has to do very different things depending on whether the functionality is present. Not to mention that Vulkan dedicates an entire chapter of its specification to sparse resources, despite the construct being entirely optional.

My question is this: why are things this complex considered optional features rather than extensions?

What is the reasoning behind making some things features and other things extensions? Granted, there aren't many extensions yet, but it isn't clear why some things end up as features while others do not.

I am not Khronos, but I would assume the same rationale as for OGL applies. That is, the usual progression (some sort of Peter principle :slight_smile: ): experimental/proprietary solution -> registered vendor extension -> ARB/KHR extension -> core functionality. I would also assume that inherent platform dependence (or a dependence on such a platform-dependent extension) disqualifies functionality from being core, as with the Surface extensions. Or being inherently open-ended, as with the Debug extension, which exists primarily to be extended by the layers. Also perhaps for Reasons (like the eternal EXT anisotropic filtering extension).