!!! nVidia and effects (hot topic) !!! --== Woow!!! ==--

glEnable(GL_NO_MSG);

Chinese Restaurant in OpenGL, now THAT’S a genius idea. While we’re at it, why not throw in EXT_pizza_hut, EXT_beer_keg, and EXT_candy_store. I will never be away from OpenGL again once these get ratified.

-SirKnight

jwatte and SirKnight, you are both pioneers and visionaries, though it should be a single API.

EXT_food and EXT_beverage might do it.

Something like:

GLfloat food_params[50] = { GL_SWEET_AND_SOUR, 1.0, GL_CHICKEN, 1.0, GL_RICE, 0.5, GL_MSG, 0.0, NULL };

glFoodOrderfvEXT(GL_STOMACH, GL_FOOD_CHINESE, food_params);

I don’t think values should be clamped at 1.0 here, but some more work needs to be done on standard quantities, since we are talking about mapping to real-world values.

The idea of binding the food straight to the stomach and cutting out the need to actually eat it is quite brilliant (if I do say so myself). Do we need other targets to save on digestion or to allow one to enjoy the meal? GL_MOUTH, GL_SMALL_INTESTINE, GL_LARGE_INTESTINE.

EXT_beverage would work in a similar way; should we support cocktails, or wait for better hardware?

I was hoping the cooking could be done in hardware using fragment programmability. I think hardware can finally support the thermal requirements (NVIDIA has the lead here, obviously); that way we can avoid the need for ordering, payment, authentication, and delivery APIs.

We can learn from the seminal paper “Ray Tracing Jell-O Brand Gelatin”, Paul Heckbert, SIGGRAPH '87, pp. 73-74.

Originally posted by dorbie:
[b]jwatte and SirKnight, you are both pioneers and visionaries, though it should be a single API. EXT_food and EXT_beverage might do it. [...][/b]

Hi

ROFL ROFL

Bye
ScottManDeath

Hello

I just wanted to say that there is a ray tracing hardware chip: the AR350. I don’t know if it is a joke, but take a look at www.art-render.com.

Its performance is measured in millions of triangle/ray intersections per second. It does radiosity and all the goodies in hardware, and it works with Maya. I repeat, I don’t know if it is a joke. Check it out yourself.

BTW, a message to C++:

You can use OpenGL to speed up your ray tracing a LOT (especially for
primitives like spheres etc.). The trick is to render the scene to a bitmap
(WGL_RENDER_TO_BITMAP) and make a DIBSection. The section grants you direct
access to its assigned memory, from which you can read the pixels. It is
much faster than glReadPixels and depends only on AGP speed. The rendered
scene shows you exactly where the requested triangles are, so you trace only
once per triangle in the first pass. The remaining intersections you can
handle with octrees. The results are as follows:

Scene: 10,000 tris (Phong, multiple light sources, textures without interpolation), 40-50 fps on an AMD 1800+ with 4x AGP.

The only pain is with triangle edges: you can find some artifacts there, which you eliminate with the octrees. These artifacts only really hurt when rendering with true Phong shading.
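
In case it helps, here is a rough, untested sketch of the setup I mean (the pixel format flag is PFD_DRAW_TO_BITMAP; the 256x256 size, the SetupBitmapGL name and the missing error checking are just for illustration):

#include <windows.h>
#include <GL/gl.h>

// Create a 256x256 DIBSection-backed GL context so the rendered pixels can be
// read straight from memory instead of going through glReadPixels.
void* SetupBitmapGL(HDC* outDC, HGLRC* outRC)
{
    void* pixels = NULL;                    // CreateDIBSection fills this in
    HDC hdcMem = CreateCompatibleDC(NULL);  // memory DC, no window needed

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 256;
    bmi.bmiHeader.biHeight      = 256;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    HBITMAP hbm = CreateDIBSection(hdcMem, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
    SelectObject(hdcMem, hbm);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hdcMem, ChoosePixelFormat(hdcMem, &pfd), &pfd);

    *outDC = hdcMem;
    *outRC = wglCreateContext(hdcMem);
    wglMakeCurrent(hdcMem, *outRC);

    // Render triangle IDs as flat colours, call glFinish(), then read 'pixels'
    // in place to find the first-hit triangle for each ray of the first pass.
    // Note this pixel format is typically not hardware accelerated.
    return pixels;
}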

I like raytracing :P

C YA.

Originally posted by Bonzaj:
[b]Scene: 10,000 tris (Phong, multiple light sources, textures without interpolation), 40-50 fps on an AMD 1800+ with 4x AGP.[/b]

Demo available? What resolution? I’m interested to see that.

WGL_RENDER_TO_BITMAP is not accelerated, as far as I’m aware.

Now THAT’S funny!

>>>Its performance is measured in millions of triangle/ray intersections per second. It does radiosity and all the goodies in hardware, and it works with Maya. I repeat, I don’t know if it is a joke. Check it out yourself.<<<

Yes, I’ve seen this before. Why would it be a joke? It’s using 8 chips, and it’s not millions but 1.1 billion tests/sec.
I think the chip speed is 137 MHz, so 8 chips x 137 MHz is roughly 1.1 billion, i.e. about one intersection test per chip per clock.

And they say it was designed in 1997.

All they need to do is reduce it to 1 or 2 chips, mass-market it, make a new extension for GL, and there you go.

Chinese restaurant?
How did that get into the conversation?

C++, you are a silly boy: never tell everybody they are bad programmers… the contributors here are long-time professionals in CG programming!

Let’s say that ray tracing is global scene processing; OpenGL is not. You’d be better off studying CG algorithms than starting crappy threads.

Yes, the AR350 is a ray/triangle intersection calculator: it provides the result for a ray (position + direction) against a triangle (3 points) in one clock. So it computes ray/triangle intersections far faster than a CPU! These processors can also run in parallel.
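
For comparison, here is roughly the test such a chip evaluates every clock, written as plain C (a Möller-Trumbore style sketch; the function names and the epsilon value are just illustrative):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  vsub  (Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3  vcross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static float vdot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 and writes the hit distance to *t if the ray (orig, dir)
   intersects triangle (v0, v1, v2); returns 0 otherwise. */
int RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = vsub(v1, v0), e2 = vsub(v2, v0);
    Vec3 p  = vcross(dir, e2);
    float det = vdot(e1, p);
    if (fabsf(det) < EPS) return 0;          /* ray parallel to triangle */
    float inv = 1.0f / det;
    Vec3 s = vsub(orig, v0);
    float u = vdot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;      /* outside first barycentric bound */
    Vec3 q = vcross(s, e1);
    float v = vdot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;  /* outside second barycentric bound */
    *t = vdot(e2, q) * inv;                  /* distance along the ray */
    return *t > EPS;
}

The chip evaluates this whole test in one clock, and several chips run in parallel.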

GPUs are NOT built to do ray tracing: the most important thing to understand is that for ray tracing their only advantage would be processing tests in parallel. No more than that: to really improve ray-tracing speed you must build a parallelized processor that integrates ray/triangle intersection and mesh casting algorithms.

OpenGL has never done ray tracing, and I think it is not planned for a long time (it doesn’t integrate the concept of a scene), and there is currently no real advantage to doing it on a GPU.

Gaby

Daveperman:

It’s not as pretty as it seems. I mentioned it was only a first pass (so without reflections) and the resolution is 256x256 - I’ll use it as a dynamic texture.
But when I finish this engine I’ll let you know for sure.

Bonzaj

P.S.

If you want to see something really exciting about realtime raytracing, try www.openrt.de

So the hardware for realtime raytracing exists. I guess it will be a must-have feature for the next generations of GPUs, I mean in 3 or 4 years perhaps. Who knows…

wait and see :)

Of course we are all bad programmers. How else do you think we manage to find the time to read your posts?

Hey, Gaby!
Don’t be offended at me! I didn’t have you and the other frequent contributors in mind when I said BAD programmers!
I wanted to receive the answer: “No, it hasn’t!” That is all!
And I received it at the beginning of this thread (from Jan2000)!
Thank you for your attention!

dorbie, you missed an extension or two I just made available on my site: GL_GIMME_CORONA_WITH_LIME_OR_LEMON_NOW

I get about 150 fps in one pass. It has a very realistic blurring effect, and if you multitexture it with an extension like
GL_GIMME_A_SHOT_OF_RUM_NOW or GL_GIMME_A_MARGARITA_RIGHT_NOW, you get astonishing blurring effects, but your fps will drop to about 0.5.

Here is a shot of my code.

if( GL_CORONA )
{
    GL_GIMME_A_SHOT_OF_RUM_NOW;
}
else
{
    GL_GIMME_A_MARGARITA_RIGHT_NOW;
}

I will post any updates I have, but you can add these very useful extensions to your glext.h

La Mancha

Hey, also we need to add drink mixing support in a simple function, say glBlendDrinkfvEXT(…) or something similar, kinda like the glBlendEquationXXX function in a way but more powerful.

Or, how about using HLSL to combine drinks and maybe food? Just think of the recipes you can create in HLSL with no limit on shader length.

-SirKnight

[This message has been edited by SirKnight (edited 04-08-2003).]

Originally posted by C++:
Hey, Gaby!
I wanted to receive the answer: “No, it hasn’t!” That is all!
And I received it at the beginning of this thread (from Jan2000)!

Obviously not. If all you wanted was a “no” answer, which was given in the first reply to your post by Jan2000 like you say, then why did you keep coming back asking the same question as if no one understood what you meant?

-SirKnight

[This message has been edited by SirKnight (edited 04-08-2003).]

The explanation is simple SirKnight, in the process of inflicting this thread on us he’s learned more about the subject. With this new found knowledge he is finally able to go back and understand the first response posted. Finally he has sufficient clue to realize that those early posts he flamed people for actually were informative and he’s been acting like an idiot all along. Unfortunately he has missed the other part of the lesson and instead of eating crow pie, has decided to pie everyone in the face again by pretending he got it and asking us why we bother responding to his followup questions and insults when the first post was the answer he needed. Perhaps we’re supposed to miss the obvious inconsistency and think he’s a genius instead of <snip>.

[This message has been edited by dorbie (edited 04-08-2003).]

DONE!!! I am still experiencing some blurs from the tests…

glBlendDrink4f( GL_RUM, 30.0, GL_VODKA, 10.0, GL_WATER, 0.0000001f, GL_ICE, 0.5f );

I’ll tell you, the GL_WATER is hurt by the evil floating point, but it works pretty decently.

The bad thing is that these extensions are expensive and very processing-intensive. Hopefully we will have them in hardware soon.

Originally posted by SirKnight:
[b]Hey, also we need to add drink mixing support in a simple function, say glBlendDrinkfvEXT(…) or something similar [...][/b]

By the way, I think the whole ray tracing on a GPU thing is not going anywhere… So, why don’t ya quit it.

La Mancha

Hey, all of you who are joking about my question!

#include <windows.h>
#include <gl\crap.h>

class Contributor : public TStupidProgrammer
{
public:
    bool OffendedC++;
};

Contributor MyTopic[ContributorsCount];
HCRAPC cc;

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    cc = wglCreateCrapContext();
    for(int i = 0; i < ContributorsCount; i++)
    {
        if(MyTopic[i].OffendedC++)
        {
            glEnable(GL_KICKING_ASS);
            glKickAssHint(GL_ABSOLUTE_HURT, GL_OH_YEAH);
            glBegin(GL_REVENGE);
                glReplaceHisBrainsWithCrapEXTcv(&MyTopic[i]);
                glTearOffHisLimbEXTcv(&MyTopic[i]);
                glKillHisPussyCatEXTcv(&MyTopic[i]);
            glEnd();
        }
    }
    return 0;
}

And now try to compile it!
Maybe your 3D card supports these functions…