Low fps even on empty canvas.

aho
Thank you for the detailed response!

Just wanted to add another observation: I've found that using setInterval in my rendering code seems to slow it down artificially. For example, with a setInterval delay of 10 ms I should theoretically be limited to 100 FPS, but in my projects I'll typically only get 45-50. If I change the event that triggers a draw, though (like using Vladimir's postMessage hack), I can get much higher framerates.
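
For anyone who wants to try it, here is a minimal sketch of that kind of postMessage loop - the message token and the drawFrame function are just placeholder names, not anything from a real library. Because the callback isn't clamped to the ~10 ms setTimeout/setInterval minimum, this mostly measures how fast your draw code and the browser can actually go:

var MESSAGE_NAME = "zero-delay-draw";   // arbitrary token, placeholder

function drawFrame() {
    // ... your WebGL rendering for one frame goes here ...
}

window.addEventListener("message", function (event) {
    if (event.source === window && event.data === MESSAGE_NAME) {
        event.stopPropagation();
        drawFrame();
        window.postMessage(MESSAGE_NAME, "*");   // schedule the next frame immediately
    }
}, true);

window.postMessage(MESSAGE_NAME, "*");   // kick off the loop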

Using that method for my Quake 3 demo, which is rendered at 854x480, I can typically get ~120FPS in Chrome 6, which isn’t too shabby for a fully rendered scene like that!

I don't know that this method is appropriate for all WebGL uses, but it at least gives you a better idea of your maximum potential frame rate.

It's a shame WebGL is dragging its heels a bit on this issue.

“Tight integration with HTML content, including layered compositing, interaction with other HTML elements[…]”

This isn't what 3D content needs at all. What we need is a full-screen, hardware-accelerated canvas; otherwise the battle is lost before it begins.

I agree. Hopefully they will address this and add a flag for bypassing the compositing.

I disagree. Having actually produced a full 3D game using this technology rather than just talking about it in the abstract (see http://tubagames.net), I know that using HTML5 for menus and such like is extremely useful.

The issue is certainly with compositing - and the difficulty is that right now, not all browsers are using OpenGL or Direct3D for compositing - and that’s an obvious issue at higher screen resolutions. However, it’s getting fixed - albeit sporadically. Some browsers on some hardware and under some OS’s have it (and I get 30Hz frame rates for a pretty complex scene) - and others don’t (and I get a miserable 8 to 10Hz - even on pretty good hardware).

It’s getting better.

While full-screen has its uses - I doubt that more than a small percentage of 3D content (note that content!=games in many cases) will want to use it.

The market these days is in casual games - and casual gamers don’t want full-screen - they want to be able to tweet their buddies, surf the net, play the game…and occasionally do the work that their bosses are paying them to do…all at the same time.

I've gotta agree with Steve here.
Whilst I agree framerate is pretty poor ATM (it will improve though(*)), being able to integrate html stuff with webgl stuff is a point of difference from standard 3d api implementations, e.g. opengl native apps; otherwise you're better off writing a native app.

Personally though I could live without the html5 integration. But I wouldn't like to see it removed.

(*) If you're using chrome, make sure you're running it with
--enable-accelerated-compositing

Also I believe once webgl is mainstream (next month perhaps) things will get faster very quickly.

ATM(*), the last time I checked, chrome renders webgl at over twice the speed of firefox. Once websites start benchmarking webgl applications and browser X is performing terribly, the makers are gonna be so embarrassed by the benchmark results they'll do something about it.
Look at javascript performance over the last ~year.
Chrome came out 10x faster than the others, forcing all the other browsers to up their game. I predict exactly the same scenario with webGL; websites love benchmarks.
This will be a new arms race

(*) Well, chromium doesn't run webgl at all for me ATM, so it has an FPS of zero :lol:

The problem with benchmarking right now is that the performance depends rather drastically on the application.

  • If your application is CPU-bound then JavaScript performance will be the determining factor.
  • If you render at high resolution then the efficiency of the compositor system becomes paramount.
  • If you make a ton of GL calls - then the efficiency of the WebGL implementation itself becomes critical.
  • If your platform choice requires that the system use Direct3D to emulate OpenGLES - then you’re going through ANGLE - and the performance of that layer becomes critical - and comparing it to a browser that can use OpenGL when available is exceedingly problematic.
  • If one implementation offers some extension that the other doesn’t - then even if it’s worse at all of the previous things - it may turn out to be faster running your application simply because the extension offers a more efficient way to do something in your niche situation.
  • …and no matter what, different GPU and CPU hardware choices will alter the balance between those things - so even if you pick a “fair” benchmark to run, the results may vary widely between users.

Characterizing the performance of graphics systems is an exceedingly tricky matter.

The problem will be that (in all likelihood) each browser will lay claim to being fastest in one or other of those areas and thereby demand the title “Fastest WebGL”…and with some justification for that subset of applications that uses it!

This is an interesting blog post on the topic; it explains why the framerate in firefox is limited.
http://hacks.mozilla.org/2010/08/more-e … tionframe/
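
The gist of that post is to let the browser call you back when it is ready to paint, instead of polling on a timer. Roughly like the sketch below - the exact names differ per browser build (Firefox's early variant uses mozRequestAnimationFrame plus a MozBeforePaint event, as described at the link), and drawFrame is a placeholder, so treat this as a sketch rather than gospel:

// Minimal paint-driven loop; the function names here are assumptions about your build.
function requestFrame(callback) {
    var raf = window.requestAnimationFrame || window.webkitRequestAnimationFrame;
    if (raf) {
        raf.call(window, callback);               // let the browser schedule the callback
    } else {
        window.setTimeout(callback, 1000 / 60);   // crude fallback, roughly 60 Hz
    }
}

function tick() {
    drawFrame();          // your usual WebGL draw routine (placeholder name)
    requestFrame(tick);   // ask to run again just before the next repaint
}

requestFrame(tick);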

Also, here is a site which seems to test the maximum number of available frames (just a guess). I don't know why this was open in my browser right now or how I got there, but it might be of interest. BTW it slows down the browser A LOT.

http://people.mozilla.com/~vladimir/misc/ctest3d.html

Also again, this is probably the way to send optional attributes to the context request:

var ctx = WebGLUtils.setupWebGL(canvas,
       {alpha             : true,
        antialias         : false,
        depth             : true,
        stencil           : false,
        premultipliedAlpha: false});

I'm not really at the point where I have tested them, but I read somewhere that the antialias option works in some OS/browser combinations, so maybe the others do too.
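
If you want to check which of those requests were actually honoured, the context can tell you - browsers are allowed to ignore attributes (antialias in particular) that they can't satisfy. Something along these lines, assuming your build already implements getContextAttributes:

var attrs = ctx.getContextAttributes();   // reports what the context actually gave you
console.log("antialias granted: " + attrs.antialias);
console.log("alpha granted:     " + attrs.alpha);
console.log("depth granted:     " + attrs.depth);
console.log("stencil granted:   " + attrs.stencil);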

Have a nice day.

I’ve only seen the antialias setting work in Chrome under Windows - it doesn’t work under Firefox because of the way the image is composited. Chrome doesn’t do it under Linux - presumably because they didn’t implement an OpenGL path for it yet…which probably means it won’t work on MacOS either.

I've never seen AA work at all (either browser, win/linux) but I'm not really too fussed ATM; this will eventually come.
On a related note I'd like to see better AF - looking at what's onscreen it looks like 1xAF.

The problem with benchmarking right now is that the performance depends rather drastically on the application.
True, there are a few more factors to worry about, but it's always been thus, e.g. PC games are CPU/GPU limited (fillrate, shader speed etc. bottlenecked). My point is once webgl goes mainstream we will see websites benchmarking Application X/Y with browser A vs browser B vs browser C (even internet explorer might join the party :lol: ). This will drive the browser makers to drastically improve their overall performance. Honestly I expect performance to be 10x better than it is now within a year.

The benchmarking problem for WebGL is even worse than for PC games though - we have to factor in the performance of the JavaScript engine and also the WebGL/ANGLE layers and performance of the compositing engine. That’s at least three more variables than we’d have to judge for Direct3D or OpenGL implementations alone.

AF? Anisotropic filtering? I don’t think that’s supported in OpenGLES without an extension - so it’s not in WebGL right now.
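
For what it's worth, if/when a browser does expose an anisotropic filtering extension, this is roughly how you would probe for it. The extension name below is the desktop GL one and is an assumption on my part; right now the call will most likely just return null, which matches the point above:

var ext = gl.getExtension("EXT_texture_filter_anisotropic");   // expect null in current builds

if (ext) {
    // With the texture bound, raise anisotropy to whatever the hardware allows.
    var maxAniso = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
    gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}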

Cheers Steve, yes, anisotropic filtering. I'm doing a platformer ATM, thus all those glancing (is that the word?) platforms/walls look crap without AF.
ATM, as the OP says, 'Low fps even on empty canvas.' - it seems the compositing is the main bottleneck.

Maybe the --enable-accelerated-compositing flag could help in chrome?

https://sites.google.com/a/chromium.org … -in-chrome

aloha

The compositing issue is gradually getting fixed - the developers know it’s a problem…but it’s not such a simple thing to change.

When you know the orientation of the surface to the camera (as is often the case in a platformer where the camera doesn't roll or pitch much) - you can change the texture to make that glancing-angle problem much less severe. The reason there is a problem is that the MIPmapping hardware has to pick the worst-case texture 'compression' and prevent that from aliasing - so (for example) on a wall where the U direction of the texture runs along the length of the wall and the V direction is vertical, the most common problem is when the U axis is compressed and the V isn't…the reverse is almost never the case for "normal" camera angles - right?

So if you make the texture much bigger in the V direction than in the U (so instead of having…say…a 128x128 map, you go with a 128x512) - then when the U-direction compression drops the MIPmap level to (say) the third MIP map, instead of sampling a 16x16 map it'll be sampling a 16x64 - which will be much less blurry in the vertical direction.

Simply increasing the resolution of the map in BOTH directions won’t help - if you had (say) a 512x512 map being viewed under similar circumstances, then the U-direction compression would still result in a 32x32 map being viewed. Ironically, a 128x512 map actually looks better than a 512x512 map under these circumstances!
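
If you want to run that experiment, a rough sketch of the setup is below - the image file name is made up and the wall geometry is assumed to exist elsewhere; the only point is that the texture is 128x512 (both dimensions powers of two) with ordinary mipmapping:

var wallTexture = gl.createTexture();
var image = new Image();
image.onload = function () {
    gl.bindTexture(gl.TEXTURE_2D, wallTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);   // U runs along the wall
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
    gl.generateMipmap(gl.TEXTURE_2D);                                // builds 64x256, 32x128, 16x64, ...
};
image.src = "wall_128x512.png";   // hypothetical asset - 128 wide, 512 tall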

That’s so strongly counter-intuitive that you’ll want to do the experiment to convince yourself that it works.

Only when the “shape” of the map (the width-to-height ratio) is a good match for the worst-case viewing angle do you avoid this problem. In effect, you are pre-filtering your map and thereby avoiding having the hardware filter it for you!

Of course in the case of floor and ceiling textures, unless you are in a long thin corridor, you don’t know whether it’ll be the U or the V direction that is compressed - so you can’t get this trick to work. However, you could consider making two different textures - one compressed in the U direction and the other compressed in V and switching between them depending on whether the camera is pointing north-south or east-west.
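
A quick sketch of that switching idea, assuming a simple camera yaw angle and two pre-built floor textures (the names are placeholders):

// Pick the floor texture whose "tall" axis lines up with the view direction.
function pickFloorTexture(cameraYawRadians) {
    // Near 0 or PI the camera looks roughly north-south;
    // near PI/2 or 3*PI/2 it looks roughly east-west.
    var lookingNorthSouth = Math.abs(Math.cos(cameraYawRadians)) >
                            Math.abs(Math.sin(cameraYawRadians));
    return lookingNorthSouth ? floorTextureTallV : floorTextureTallU;
}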

This technique leads to a more general solution called “RIPmapping” (rectangular MIPmapping) - but to do that properly requires hardware support - and these days, true anisotropic filtering is preferred. However, in constrained circumstances - and without that hardware support - this trick definitely helps!