Eh? I didn’t give any reason why D3D only works properly with large batches; I merely stated that, as it stands in the major implementations, it is definitely CPU intensive. The reasons for this are irrelevant to the developer. I’ve measured the differences between the two APIs with real-world data, and D3D is significantly slower. It’s comparable to OpenGL with larger batches, however.
D3D is trying to be object-oriented (it’s built on the Component Object Model): it has a notion of interfaces (d3d, d3ddevice) and a notion of objects (texture objects, vertex buffer objects, index buffer objects, surface objects, etc.). So it already works in an object-oriented way — why, then, so many enums? Every single state is an enum, and every single state takes exactly one parameter, a 32-bit integer. Things like fog range values, which are floats, have to be macro-magic’d into 32-bit integers to fit this grand design. What the hell is wrong with having a function for every state? It’s not like there are hundreds of states. Then you’d be type-safe, and you wouldn’t need to look up the attributes of each particular object/state in a table. If you’ve got the freedom to reinvent your entire interface with every revision, it absolutely should be completely object-oriented, because we’re dealing with objects all the way down the (pipe)line.
Why, specifically, do you believe object-oriented design doesn’t belong in a low-level API?
Low level doesn’t necessarily mean straight C, you know.