OK, I had a bit of time to read and think through the specs, and I think I might have grasped them.
First of all, I think it's a step in the right direction. It offers new features that even D3D10 does not have. Issuing draw commands will get faster, leaving more CPU time for the application itself. Although the GPU will not render faster per se, there are now more possibilities for complex data structures used in shaders.
I have a few points to criticise, though. First of all, the naming of the extensions is… just awkward.
Why not use “NV_shader_memory_access” (it's about shaders directly accessing memory, right?) and “NV_buffer_direct_memory” (I don't get the “unified” part…)?
I think it might be good not to provide the non-Named functions at all. We all know that bind-to-edit will die out some day, so why provide these obsolete semantics for brand-new functionality? Just provide the Named-function semantics (but please, without the “Named” prefix).
Also, the specs refer to some functions of EXT_direct_state_access without mentioning a dependency on it.
Now we get pointers and pointer arithmetic in shaders. That allows creating really complex data structures, like lists, trees, and hashmaps - the specs even talk about whole scene graphs. But how are we supposed to debug such monsters?
Is the driver using the “sizeiptr length” parameter of BufferAddressRangeNV to determine that a certain region of memory is in use by drawing commands? If not, do we have to use fences for that (welcome back, NV_vertex_array_range)?
It seems that once a VBO is made resident, it can never become non-resident (unless the application tells it to be). How can GL make that guarantee? What is the use of a non-resident buffer, then? Does a resident buffer have any performance penalties?
I like the separation of vertex attribute format, enables, and pointers. VAO did not offer this and was therefore useless (for me). You could take it one step further and provide “vertex format” objects which encapsulate the format and enables. I don't know whether that would be a great benefit, though (switching a whole stream setup with one command).
I'd like to suggest moving all the buffer-object-related functions (MakeBufferResidentNV, IsBufferResidentNV, MakeBufferNonResidentNV) from NV_shader_buffer_load into NV_vertex_buffer_unified_memory. They feel more natural there. Additionally, IsBufferResidentNV could be replaced by, or accompanied by, a new parameter to glGetBufferParameteriv().
Last but not least, IMHO the use of these new extensions will probably have a fair impact on the design of a renderer. It cannot easily be made optional. Therefore, I will probably hesitate to actually use these features if they are not available on ATI as well.
my 2c 