GF4 Go = no VP or VBO?

Our Dell laptop with a GeForce4 440 Go doesn't seem to support ARB_vertex_program or ARB_vertex_buffer_object. I installed the latest NVIDIA drivers, but the extensions just aren't there.

These extensions should be supported by the GF4 Go, right? Is it likely I'm just having trouble installing the drivers?
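For reference, a quick way to see what the driver actually exposes looks something like this (a minimal sketch; strstr is the quick-and-dirty check, and a strict one would match whole space-separated tokens):

/* Minimal sketch: check the extension string once a GL context is current.
   strstr() can false-positive on prefix matches, so a strict check would
   compare whole space-separated tokens. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

/* e.g.:
   printf("GL_ARB_vertex_program: %d\n", has_extension("GL_ARB_vertex_program"));
   printf("GL_ARB_vertex_buffer_object: %d\n", has_extension("GL_ARB_vertex_buffer_object")); */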

I noticed Tom's Hardware registry doesn't include hardware for mobile computers…

The card doesn't support ARB_vertex_program in hardware, but I'm surprised that it doesn't provide a software fallback (though I'm no expert on Windows drivers).

I can see no reason why it wouldn’t support ARB_vertex_buffer_object…

I could be wrong, but IIRC NVIDIA doesn't support mobile GPUs in their drivers. You need official Dell drivers, which are probably way behind the current reference drivers provided by NV.

Ah, that’s probably it! Silly that this isn’t mentioned anywhere obvious. Ruined a presentation it did.

Thanks

If the TNT2 at work exposes ARB_vertex_buffer_object, I don't see any reason the GF4 shouldn't have it.

The Dells are up to 42.xx if I remember correctly. They were languishing at 28.xx for a YEAR, which was kind of painful.

Anyway, if you use ARB_vertex_program on a GeForce4 Go, you'll effectively turn off the hardware transform and do it all on the CPU. The exception is the GeForce4 Go 4200, which is a "real" GeForce4.
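There's no clean query for this; about the best you can do is a crude renderer-string heuristic, something like the sketch below (the exact strings vary by driver, so treat it as illustrative only):

#include <string.h>
#include <GL/gl.h>

/* Crude heuristic sketch: assume ARB_vertex_program is software-emulated on a
   GeForce4 Go unless the renderer string says it's a Go 4200. The strings are
   driver-dependent, and desktop parts without vertex hardware (GF2, GF4 MX,
   TNT) would need their own cases, so treat this as illustrative only. */
int vp_probably_in_hardware(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (renderer && strstr(renderer, "GeForce4") && strstr(renderer, "Go"))
        return strstr(renderer, "4200") != NULL;
    return 1;   /* anything else: this sketch just assumes hardware */
}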

Originally posted by Nutty:
I could be wrong, but IIRC NVIDIA doesn't support mobile GPUs in their drivers. You need official Dell drivers, which are probably way behind the current reference drivers provided by NV.

Actually, you are right, but the only difference between the NVIDIA reference drivers and the Dell drivers is the .inf file. Dell adds resolution settings for the laptop displays (e.g. 1400x1050 on my Inspiron…).
So if you want to use the newest Detonators with your Dell system, all you need to do is add a few lines to the .inf file, and that's all.
Fortunately, there are some guys out there who have nothing better to do than watch for new Detonators and modify the .inf files as fast as they can.

So all you need to do is download the newest drivers from NVIDIA and then get the .inf file from here:
http://www.geocities.com/madtoast/

It works! (It has for at least two years; I've actually never used the unmodified reference drivers.)

And this is the forum where most of the modified-.inf sites are announced (it's also very useful when you have trouble with your installation…):
http://forums.us.dell.com/supportforums/board?board.id=insp_video

NVIDIA's Linux drivers definitely support the GeForce Go chips - they have their own section in the README… And the GeForce4 Go 4200 support includes ARB_vertex_buffer_object and ARB_vertex_program.

No need for the Dell drivers if you run with the penguin…

Happy Coding
:wink:

Originally posted by jwatte:
Anyway, if you use ARB_vertex_program on a GeForce4 Go, you'll effectively turn off the hardware transform and do it all on the CPU. The exception is the GeForce4 Go 4200, which is a "real" GeForce4.

I thought software vertex programs were supposed to be fast?!? It's excruciatingly slow. My equivalent software implementation runs almost 100 times faster, buffer copies and all included.
I hope that's not the case for every software implementation, or they really should never have exposed it. It's almost as bad as ARB_fragment_program emulation!

Madoc,
when you are using software-emulated VPs, make sure not to store your vertex arrays in video memory (or AGP), because that memory is uncached and the software emulation needs to read it.
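In other words, if you normally put vertices into AGP or video memory with NV_vertex_array_range (wglAllocateMemoryNV), fall back to plain malloc'd system memory when the vertex-program path isn't hardware accelerated. A rough sketch of the idea, where vp_in_hardware is a flag you have to track yourself:

#include <stdlib.h>

/* Sketch only: pick where the vertex data lives depending on whether
   vertex programs are expected to run in hardware. */
void *alloc_vertex_memory(size_t bytes, int vp_in_hardware)
{
    if (vp_in_hardware) {
        /* AGP/video memory via NV_vertex_array_range is fine here,
           because only the GPU reads the data:
           return wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f); */
    }
    /* Software-emulated vertex programs read every vertex on the CPU,
       and AGP/video memory is uncached for CPU reads, so keep the data
       in ordinary (cached) system memory instead. */
    return malloc(bytes);
}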

Yeah, I realised this later on. I'm pretty new to vertex programs and hadn't used them on something that doesn't support them in hardware.

How do people deal with this? AFAIK there is no convenient way of determining whether vertex programs are supported in HW.

As far as I remember, VBOs automatically take care of memory selection: if you have a GF2 or TNT and use vertex programs, the data is kept off-card. I guess that's not the case with VAR.
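That matches my understanding of ARB_vertex_buffer_object: you only supply a usage hint and the driver decides where the buffer actually lives, so it can keep the data in system memory when the vertex path is emulated. A minimal sketch, assuming the glGenBuffersARB/glBindBufferARB/glBufferDataARB entry points have already been obtained (via wglGetProcAddress on Windows) and a context is current:

#include <GL/gl.h>
#include <GL/glext.h>   /* tokens and typedefs for ARB_vertex_buffer_object */

GLuint make_vbo(const void *vertex_data, int num_bytes)
{
    GLuint vbo = 0;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    /* GL_STATIC_DRAW_ARB is only a hint; the driver chooses whether the
       buffer ends up in video, AGP or system memory. */
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, num_bytes, vertex_data, GL_STATIC_DRAW_ARB);
    return vbo;
}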

AdrianD and Madoc, Madman is correct. There was a bug in previous drivers that caused that problem, but I think it's now fixed and works like it should.

Well, this was using the very latest drivers (45.23, is it? just downloaded from NVIDIA anyway) and the performance was absolutely atrocious, same with NV_vertex_program 1.0.
Not resolved for a GF4 440 Go? I haven't had a chance to test it since.
It seems unlikely that such bad performance could come simply from software VP, though.

I guess there really isn’t an intelligent way to tell whether VPs are running in hw then?
These VPs and FPs really are a pain in the arse. Still not time to make a move to them?

Originally posted by Madoc:
I guess there really isn’t an intelligent way to tell whether VPs are running in hw then?
These VPs and FPs really are a pain in the arse. Still not time to make a move to them?

Yep. If you want to include vp/fp support in your app, you have to write at least two code paths: one with vp/fp support and another for standard OpenGL, simply because there are only a few users out there with true vp/fp support in hardware. So my solution for this problem is just to check for the presence of vp/fps and let the user choose their preference.
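As a sketch (prefer_vp_fp stands for the user's saved preference; the names are placeholders):

#include <string.h>
#include <GL/gl.h>

/* Sketch: take the vp/fp code path only if the driver exposes the
   extensions AND the user has opted in; otherwise use standard OpenGL. */
int use_vp_fp_path(int prefer_vp_fp)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    int have_vp = exts && strstr(exts, "GL_ARB_vertex_program") != NULL;
    int have_fp = exts && strstr(exts, "GL_ARB_fragment_program") != NULL;
    return prefer_vp_fp && have_vp && have_fp;
}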

As with most (or all) OpenGL things, you should test it for speed before you use it (if you're making a serious application, anyway), since there is nothing that tells you whether anything is in hardware or software, or whether the software path is as fast as you need.

I used to do that, benchmark things on startup. It's just not what I consider convenient. Eventually I ended up simplifying it all by reducing to software paths and making assumptions, because it was so annoying and I was working on dedicated systems a lot.
Another alternative is to have a little independent benchmarking program that configures an .ini file, but it would need to run every time the hardware/software configuration changes, and who's going to do that?
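For what it's worth, a startup benchmark along those lines doesn't have to be much code. In this sketch draw_test_scene and the .ini file name are placeholders, and plain clock() stands in for a proper high-resolution timer:

#include <stdio.h>
#include <time.h>
#include <GL/gl.h>

extern void draw_test_scene(int use_vp);   /* placeholder: renders one representative frame */

static double time_frames(int use_vp, int frames)
{
    clock_t start = clock();
    for (int i = 0; i < frames; ++i)
        draw_test_scene(use_vp);
    glFinish();   /* make sure the GL work has actually completed */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* Run once at startup (or whenever the user asks) and cache the verdict. */
int benchmark_vp_path(void)
{
    const int frames = 50;
    double with_vp    = time_frames(1, frames);
    double without_vp = time_frames(0, frames);
    int use_vp = (with_vp <= without_vp * 1.1);   /* allow ~10% slack */

    FILE *cfg = fopen("render_path.ini", "w");    /* placeholder config file */
    if (cfg) { fprintf(cfg, "use_vp=%d\n", use_vp); fclose(cfg); }
    return use_vp;
}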

Anyway, thanks for your help everyone.

My preliminary conclusion is as follows:
Include a “try vertex shaders” switch in the user interface. This will initially set an internal use_vs flag and an internal vs_are_fast_enough flag. Also set a monitor_vs_fps flag to true.

Monitor frame rates (needed for animation work anyway).

if (monitor_vs_fps && use_vs)
{
    // If fps are consistently horrible (< 20) over the first fifteen
    // frames rendered and try_vs == true, record an average and
    // deactivate vertex shader usage:
    fps_with_vs = avg;
    use_vs = false;
}
if (monitor_vs_fps && !use_vs)
{
    // Monitor the next fifteen frames.
    // If the average fps is more than double fps_with_vs:
    vs_are_fast_enough = false;
    // If there's no tangible difference, you can also turn vs back on.

    monitor_vs_fps = false;
}

When the user later checks back with the config settings, display the use_vs flag as the current setting and restart the procedure if vs get turned on.

The problem with that scheme is that you need to be able to switch between vs and fixed function on the fly. That may be a bit tricky, but it can be done.
You also need a good test case that doesn’t produce wild fps fluctuations over the monitored frames. And it shouldn’t be too simple (such as a GUI), it should be a ‘real’ scene with representative vertex load.

Ho humm …