Compatibility Problems (Permedia and Voodoo2)

I am trying to do some compatibility testing of my app. I picked up some old cards, began testing, and ran into the following problems, and I was wondering if anyone had any suggestions:

Permedia 3000 (Permedia 1 chipset)
The problem here appears to be with alpha blending. All my transparent polys are showing up opaque. The specs I found for the card say it supports alpha, but that doesn't seem to be the case. From looking around (including the archives here), the best I can come up with is that it supports color-key transparency but not alpha transparency. Can anyone confirm whether this is true? I'm going to try converting the texture to color-keyed tonight to see what it does. I was also wondering about other cards with the same issue. Is there any way to detect whether a card supports true alpha transparency (other than hacks like drawing to the frame buffer, reading it back, and checking the parts that should be transparent, which I don't want to do) so that I can have my app do an auto-fallback? I could always get a list of cards that have this problem and hardcode my app to fall back for those specific cases, but I'd rather have a generic way of doing this.
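In case it helps anyone searching later, the readback hack I mentioned (and would rather avoid) would look something along these lines. This is a rough, untested sketch: it assumes a current GL context with an orthographic projection where one unit equals one pixel, and a real test would probably want to draw a textured quad with an actual alpha texture rather than rely on vertex alpha.

    /* Rough sketch of the framebuffer readback hack (untested). */
    #include <GL/gl.h>

    int SupportsAlphaBlending(void)
    {
        unsigned char pixel[3];

        glDisable(GL_TEXTURE_2D);
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        /* clear to pure red, then draw a fully transparent green quad over it */
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glColor4f(0.0f, 1.0f, 0.0f, 0.0f);      /* alpha = 0: should be invisible */
        glBegin(GL_QUADS);
            glVertex2f(0.0f, 0.0f);
            glVertex2f(32.0f, 0.0f);
            glVertex2f(32.0f, 32.0f);
            glVertex2f(0.0f, 32.0f);
        glEnd();
        glFinish();

        /* if blending works, the sampled pixel should still be red */
        glReadPixels(16, 16, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
        glDisable(GL_BLEND);
        return pixel[1] < 128;                   /* still red => blending worked */
    }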

Voodoo2
This one is a bit strange. My app loads, and it displays transparency correctly (that's a plus), but it has a different problem. The app seems offset by about 3/4 of the screen. That is, the top 3/4 of the screen is blank, and the top of the app starts 3/4 of the way down and just runs off the bottom (almost as if the vertical position of the monitor were screwed up). To make it even stranger, the 3dfx logo that shows up when the app first starts actually displays correctly on the screen (it's centered). I found a posting in the archives here (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/000003.html) where someone had the same problem. I contacted the author of that post, and he said he never resolved the issue. Has anyone else ever figured this out?

On another note, does anyone have any better sources for obtaining old 3D cards cheaply, other than eBay? I have a Riva 128 and an i740 coming in the mail, so I might be posting again soon (although I sure hope not). But I need to get my hands on some more cards (G200, G400, Voodoo 1/3/5, Kyro, Permedia 2/3, and everything from ATI and S3… I think that covers my list).

Thanks
Ron Frazier

Interesting. I think a lot of the older OpenGL boards (possibly the Permedia 1) didn't properly support all of the blend modes. So alpha might work in certain situations, but not in others. I think these problems might be mentioned on GLsetup.com (if that still exists).

Sorry, don’t think I can help with the Voodoo2. Perhaps it’s a driver issue. Does it happen with every resolution?

Permedia: That's a possibility. However, I am just doing GL_SRC_ALPHA : GL_ONE_MINUS_SRC_ALPHA blending (which I thought was pretty much universally supported). I will try just doing GL_ONE : GL_ZERO blending and use the alpha test to do an all-or-nothing accept/reject of each pixel and see if that works better.
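In other words, roughly this (just a sketch of the fallback path, not tested on the Permedia yet):

    /* all-or-nothing transparency: no blending, just reject mostly-transparent texels */
    glDisable(GL_BLEND);                  /* equivalent to glBlendFunc(GL_ONE, GL_ZERO) */
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f);         /* keep texels that are >= 50% opaque */
    /* ... draw the "transparent" polys as usual ... */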

Voodoo2: So far, I only tried 640x480x16.

I am just wondering why you want to test on basically dead hardware?

I think your app is good enough if it can be used on either a Rage 128 or TNT2 at the very low end, then the Radeon (pre-7500) and GeForce 1/2, then the Radeon 8500 and GeForce 3/4. That will cover 85% of the mainstream cards. The other cards are either very high end or very low end, and most have crappy drivers that are not supported anymore anyway.

Originally posted by Elixer:
I am just wondering why you want to test on basically dead hardware?

Because not everyone is developing an FPS/RTS/MMORPG for the shelves of CompUSA. I'm working on a shareware title in none of the above categories. A reasonable portion of my target audience is NOT hardcore gamers. They have semi-old systems, and they don't upgrade every 6-18 months.

Example 1: My sister bought a new Gateway less than 3 years ago…Riva 128…she has no reason to upgrade her system as it suits her and her family quite well.

Example 2: My mother-in-law also bought a Gateway less than 3 years ago…Banshee…she has no reason to upgrade as it suits her and her family quite well.

Example 3: At my last job, we bought a new Dell just over a year ago…ATI Rage 2C

I could go on. Even today, most budget PCs from Best Buy, CompUSA, and Circuit City are selling with Intel 810 integrated graphics (not too respectable; below TNT class, I believe). It's easy to say "just support this card and that card" when you aren't working on a real product, or when you have name recognition and are one of the big boys and can get people to upgrade just for your game. For me, it's not quite that easy.

Will my users have all of these cards? Probably not. I don't think the Permedia was the most popular card. But I can't really say which of these cards they will have. The other issue is that, especially in the shareware market, word of mouth is a pretty valuable thing. If a user has problems getting the game working on his Voodoo, he's probably not going to have many good words to pass on to his friend with the GeForce MX. In such a case, a customer that I excluded from my target audience (based on minimum specs) has affected a user I do include in my target audience. Now, obviously I can't catch everyone, but every card I do catch helps just a little bit.

Here is another perspective. If I can get my app working well on so many old cards, there is a good chance it is going to be more stable even on new hardware. How many times have you seen someone write an app for their GeForce which crashes on half of everyone else's GeForces?

I got the Voodoo problem fixed on my lunch hour. It turns out I forgot to make the app fullscreen. Stupid me. I knew that.
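For anyone who hits the same thing: as far as I can tell, the Voodoo2's GL driver only renders fullscreen, so the app has to actually change the display mode and cover the screen with the window. On Win32 that's roughly the following (a minimal sketch for 640x480x16, error handling omitted; the GL window itself is assumed to be a borderless WS_POPUP window covering the whole screen):

    #include <windows.h>

    /* switch the desktop to 640x480x16 before creating the GL window */
    void GoFullscreen640x480(void)
    {
        DEVMODE dm;
        ZeroMemory(&dm, sizeof(dm));
        dm.dmSize       = sizeof(dm);
        dm.dmPelsWidth  = 640;
        dm.dmPelsHeight = 480;
        dm.dmBitsPerPel = 16;
        dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
        ChangeDisplaySettings(&dm, CDS_FULLSCREEN);
    }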

Anyway, the Permedia problem is still up in the air for now, so I still welcome any advice.

About your chipset problem:

I don't know very much about the Permedia 1 chipset, but I know for sure that the Permedia 2 chipset does not support multiplicative blending, and there is no alpha interpolation between the vertices (only the alpha value of the first vertex in a triangle is used for that triangle).

And because the Permedia 1 is much older than the Permedia 2, I would tend to say it is very possible that this chipset does not support alpha blending at all.

good luck with your project.

adrian

Why bother testing old cards?

Originally posted by iNsaNEiVaN:
Why bother testing old cards?

Uhm. Did you even bother reading the replies in this thread? Christ. Some people couldn’t get a clue even if it were to bludgeon them to death.

Originally posted by fenris:
Some people couldn’t get a clue even if it were to bludgeon them to death.

And given how much text I wrote in that reply, that's a pretty big bludgeoning object. Seems like we've got some hyperactive people around here who read a post and get so excited about putting in their 2 cents that they don't even bother to read the rest of the thread. And come on, with a name like iNsaNEiVaN, he/she has got to be hyperactive or have ADD. Time to pass out some Ritalin.

On another note, I almost have the Permedia problem sorted out. I just need to work out a few more details and I’ll post again soon.

LordKronos:
Preface: I don’t want to be an ass - but I have to ask …
For targeting people with such low-spec graphics cards, wouldn't it make more sense to just write a 2D game and leave it at that?
Heck, if you want to stay 3D, maybe a specialized software rasterizer would even be faster than an i810's hardware 'acceleration'. I know (I do) that's an awful lot of work.
I respect your motives, and if you can manage to do that, more power to you, but for good reasons I couldn't be bothered to target anything less than Riva TNT class hardware.

PS: This was written in the absence of any ability to actually be helpful. I don't have access to a Permedia 1 chipset; if I had one, I'd sure try to help.

OK, thanks to Won and AdrianD, I got the Permedia issue sorted out. I created a test app that does a bunch of quads using various blend modes, textures, alpha test, vertex colors, etc. Here is what I came up with:

1. Multiplicative blending is not supported; the only values you can pass to glBlendFunc are GL_ONE and GL_ZERO.

2. Alpha is supported in textures. If bilinear filtering is enabled, the alpha is properly interpolated between texels.

3. Alpha test works, so you can say glAlphaFunc(GL_GEQUAL, 0.5f) and only those pixels that are 50% or more opaque will be drawn.

4. Vertex color RGB is interpolated between vertices.

5. The texture RGB color does indeed get multiplied by the interpolated vertex RGB color.

6. Vertex alpha is completely ignored. It isn't interpolated, it isn't multiplied into the texture the way the RGB is, and it isn't used in the alpha test. Essentially, vertex alpha does not exist.

7. When calling glTexImage2D, the third parameter (number of components) can only be 1, 2, 3, or 4; GL_RGBA isn't accepted the way it is on most cards.

I can live with this. It doesn't allow me to do alpha blending, but I can do the alpha test. That's the most important part, because I use it for font textures. I can also do additive alpha by using premultiplied-alpha textures. In many circumstances, additive alpha is a close enough approximation to alpha blending, and I think it will work for me.
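In practice the fallback boils down to something like this (my own workaround sketch, nothing official; the premultiplication of RGB by alpha is done offline on the texture data before upload):

    /* Path 1: cut-out transparency (fonts, etc.) via the alpha test */
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f);          /* draw only mostly-opaque texels */
    /* ... draw cut-out geometry ... */

    /* Path 2: additive "alpha" using textures whose RGB was premultiplied
       by alpha offline; GL_ONE / GL_ONE only uses factors the Permedia accepts */
    glDisable(GL_ALPHA_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);           /* source RGB already scaled by alpha */
    /* ... draw glowy / translucent geometry ... */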

Now I just have to go back and roll these fixes into my app and see if any other issues show up. I also just got my Intel 740 today. Thanks to everyone that helped.

zeckensack:

No problem, that's a legitimate question. I made my decisions as I did based on the premise that while a good portion of my target market does not have up-to-date systems, they do at least have a 3D graphics accelerator, even if it's only a first-generation card.

I have been considering recently whether or not to add a limited-function software rasterizer. One issue is that it takes a lot of time to write, and it takes a heck of a lot of time to write well. I've never written a full 3D renderer before, so I would have a bit of a learning curve there. Also, try doing a web search for writing a software renderer/rasterizer: not too many relevant hits. I guess I'd have to pick up some more books.

Another issue is that I'm not sure a software renderer would be that fast for the scene complexity I am dealing with (about 500-4000 polys per frame). I started by writing a shell of a software renderer which simply takes vertices, transforms them into screen space, clips them, and renders a single pixel per vertex using DirectDraw. My frame rates for that were pretty low (high teens or low 20s, as I recall) even on a Celeron 266. Granted, I didn't make it totally optimized, but my gut instinct told me that once I added in scanline conversion, texturing, z-buffering, and alpha testing on a few polys, I would have a heck of an optimization task in front of me to make it run on such a low-end processor. For now, I am going to leave out software rendering. When I get some actual feedback from actual users, if I see enough demand and adequate hardware to handle it, then I will pursue it, but I suspect that won't happen.
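For what it's worth, the shell I tried was little more than the following (a simplified sketch; the DirectDraw surface setup and locking are omitted, and "fb" stands in for a pointer to a locked 16bpp surface):

    typedef struct { float x, y, z; } Vec3;

    /* Transform each vertex by a column-major 4x4 modelview-projection matrix,
       trivially clip against w, and plot one pixel per vertex into a 16bpp
       framebuffer ("pitch" is in bytes per scanline). */
    static void PlotVertices(const Vec3 *v, int count, const float mvp[16],
                             unsigned short *fb, int pitch, int w, int h)
    {
        int i;
        for (i = 0; i < count; ++i)
        {
            float cx = mvp[0]*v[i].x + mvp[4]*v[i].y + mvp[8]*v[i].z  + mvp[12];
            float cy = mvp[1]*v[i].x + mvp[5]*v[i].y + mvp[9]*v[i].z  + mvp[13];
            float cw = mvp[3]*v[i].x + mvp[7]*v[i].y + mvp[11]*v[i].z + mvp[15];
            int sx, sy;

            if (cw <= 0.0f)                   /* behind the eye: reject */
                continue;

            /* perspective divide and viewport mapping */
            sx = (int)((cx / cw * 0.5f + 0.5f) * (w - 1));
            sy = (int)((1.0f - (cy / cw * 0.5f + 0.5f)) * (h - 1));
            if (sx < 0 || sx >= w || sy < 0 || sy >= h)
                continue;

            fb[sy * (pitch / 2) + sx] = 0xFFFF;   /* plot a white pixel */
        }
    }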

All in all, I'm not too bothered so far by the effort required to support these old cards. A small bit of work got the Voodoo2 working, and it runs great. I think I've almost got the Permedia there. After that, hopefully it will get better.

Just an update. Don't know that anyone is interested, but just in case someone ever searches the archives (yes, it actually does happen) for this subject…

It turns out the Permedia does support at least some multiplicative blending, but only when a texture object is selected. Or at least it supports GL_SRC_ALPHA:GL_ONE_MINUS_SRC_ALPHA.

Another issue: it seems the Permedia drivers have a bug in them (not that I'm surprised) with regard to glTexParameteri. As far as everything I've read, glTexParameteri changes the parameters for the currently selected texture object, and these values are loaded each time the texture object is bound. As an example, if you bind texture A and set it to use linear filtering, bind texture B and set it to use nearest filtering, then bind texture A again and draw a poly, it will be drawn using linear filtering. This is the way it appears to work for every driver I've ever tested, and it's how every code sample I've ever seen has done things. However, the Permedia doesn't seem to work this way. It uses the values set by the last call to glTexParameteri. So in the example above, when you draw the poly, you will actually get nearest filtering (since that's the last filtering mode set).

Yet another issue: it seems that when the card runs out of texture memory, it returns GL_OUT_OF_MEMORY. I thought OpenGL was supposed to handle this for you and dynamically upload and download textures as necessary. The Permedia drivers don't do this. Does anyone know what the spec says about this? Is dynamic uploading a feature only supported by AGP cards? That would explain my confusion on the matter, since this is the first PCI 3D accelerator I've ever owned.
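One way to at least detect the failure (just a sketch of what I'm considering, not something I've settled on; gluScaleImage is standard GLU, and the function and parameter names here are made up):

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <stdlib.h>

    /* Try a full-size RGBA upload; if the driver reports an error
       (presumably GL_OUT_OF_MEMORY), retry with a half-resolution copy. */
    int UploadTextureWithFallback(const unsigned char *pixels, int w, int h)
    {
        unsigned char *half;

        while (glGetError() != GL_NO_ERROR) {}        /* flush stale errors */

        glTexImage2D(GL_TEXTURE_2D, 0, 4, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        if (glGetError() == GL_NO_ERROR)
            return 1;                                 /* full-size upload worked */

        half = malloc((w / 2) * (h / 2) * 4);
        if (!half)
            return 0;
        gluScaleImage(GL_RGBA, w, h, GL_UNSIGNED_BYTE, pixels,
                      w / 2, h / 2, GL_UNSIGNED_BYTE, half);
        glTexImage2D(GL_TEXTURE_2D, 0, 4, w / 2, h / 2, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, half);
        free(half);
        return glGetError() == GL_NO_ERROR;
    }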

LordKronos: go to www.glsetup.com - there you should find out about various issues with various cards.

edit: regarding the voodoo shifting issue, perhaps you can find info here. Also, I'm pretty sure this was discussed (and resolved) on the OpenGL game developers mailing list a long time ago: http://www.wischik.com/lu/programmer/wingl.html http://www.pseudonymz.demon.co.uk/

[This message has been edited by zed (edited 04-06-2002).]

LordKronos,

glTexParameteri() is supposed to “stick” with the texture object. It’s a parameter of the texture.

Perhaps the confusion is that glTexEnvi() is not supposed to “stick” with the texture object, as it’s a parameter of the texturing environment (texture ALU, in effect).

Loosely, glTexParameteri() takes TEXTURE_nD as its first argument, and thus “sticks” to a bound texture object. glTexEnvi() takes TEXTURE_ENV as its first argument, and thus doesn’t affect the bound texture object.

Page 145 in the OpenGL 1.3 spec is reasonably clear about this (second paragraph from the top).
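Roughly, in code (texA is just a placeholder texture object name):

    /* per-texture-object state: sticks to whatever texture is currently bound */
    glBindTexture(GL_TEXTURE_2D, texA);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* per-unit environment state: not part of any texture object */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);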

Originally posted by jwatte:
glTexParameteri() is supposed to “stick” with the texture object. It’s a parameter of the texture.

That's what I thought, but it doesn't seem like the Permedia drivers obey this. Oh well, no big deal. I have a workaround for it.
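(For the archives, the workaround is nothing fancier than re-applying the filter state on every bind instead of trusting the texture object to remember it; a rough sketch with made-up names:)

    /* re-specify filtering every time, since the driver doesn't restore it */
    static void BindTextureWithFilter(GLuint tex, int useLinear)
    {
        GLint filter = useLinear ? GL_LINEAR : GL_NEAREST;
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);
    }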

Thanks for confirming it for me, though. I was pretty sure that was right (either that, or almost everyone else is wrong), but I don't actually have a copy of the OpenGL spec.

After figuring everything out, I now have my app working 100% with the Permedia. Not too bad; it only took me about 6-8 hours, and I worked out some other bugs and made the renderer more robust in the process. Just as a reference, the system I'm testing on is a P60 with a P120 overdrive, and I'm getting almost 10 FPS on my debug build. The release build will definitely go faster, and I have room to optimize my rendering algorithms too (right now I've got a lot of unneeded render state changes, I'm not doing any sorting, only minimal culling, etc.). At this speed, the game is somewhat playable. I expect that on something like a P200 MMX, optimized, it should run great. All in all, not too bad for a shabby little card like this (from what I read, it's actually one of the slower cards too). I still haven't tested on the Intel i740. The dang thing doesn't have OpenGL drivers for Win2K, so I'm gonna have to tear apart the wife's PC.
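(For the archives, the render state cleanup I have in mind is nothing fancier than sorting the draw list by texture, so each texture gets bound once per frame instead of once per poly; a rough sketch with made-up names like DrawItem and drawList:)

    #include <GL/gl.h>
    #include <stdlib.h>

    typedef struct { GLuint texture; int firstVert; int vertCount; } DrawItem;

    static int CompareByTexture(const void *a, const void *b)
    {
        const DrawItem *da = (const DrawItem *)a;
        const DrawItem *db = (const DrawItem *)b;
        return (da->texture > db->texture) - (da->texture < db->texture);
    }

    /* sort by texture, then draw, binding only when the texture changes */
    static void DrawSorted(DrawItem *drawList, int count)
    {
        GLuint bound = 0;
        int i;
        qsort(drawList, count, sizeof(DrawItem), CompareByTexture);
        for (i = 0; i < count; ++i)
        {
            if (i == 0 || drawList[i].texture != bound)
            {
                glBindTexture(GL_TEXTURE_2D, drawList[i].texture);
                bound = drawList[i].texture;
            }
            glDrawArrays(GL_TRIANGLES, drawList[i].firstVert, drawList[i].vertCount);
        }
    }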

Thanks again to everyone

Originally posted by fenris:
Uhm. Did you even bother reading the replies in this thread? Christ. Some people couldn’t get a clue even if it were to bludgeon them to death.

Watch it, fenris - there are some who would regard this as a flame, and no matter how stupid the target of your comment is, you will be the one getting the grief.

LordKronos, good luck with your project - I certainly respect you testing it on such low-end cards… I know my mother will appreciate your efforts.
What kind of game is it, to have such a wide appeal?

You won't be able to use DX8.0+ with the Permedia2 and the reference 3Dlabs driver. Creative (the Sound Blaster company) has a driver that will work; it's for their Graphics Blaster Exxtreme card, which I had for a while until I got a GF2. In OpenGL, this card is weird: if you don't resize your desktop to the app window size, the buffers will be allocated at the size of the desktop, even if you're running in a window of lower resolution. Then again, I don't know if all cards do this, since I mostly work with D3D. But in D3D, I was able to allocate buffers in windowed mode, leaving my desktop resolution high and the app size low. My Permedia2 worked better/faster in D3D than in OpenGL.

This card has a 15-bit depth buffer with a 1-bit stencil tacked on. It can also do 32bpp color rendering, but the chip doesn't have a 32-bit Z/stencil buffer. I think Quake 2 ran at 20fps at 640x480 (or was it 800x600?) with a Cyrix 233MHz CPU and 32MB of fast page mode RAM. Brings back fond memories, since I started 3D coding with this card. I still use the card in my backup computer - 2D only, no 3D work.