[slightly OT] Free renderbump tool beta

(Sorry if this is a slightly off topic post, however, you guys are the perfect target audience for this…)

We need beta testers for a renderbump tool, which will be freely available once it’s past beta testing.

Please test the tool on your models and send feedback to me: christian.seger@bth.se

The tool features:

  • Import of .ASC, .ASE, .OBJ formats
  • 3 different Renderbump commands
    • Renderbump1 - renders one bump map for the entire model (using imported texture mapping)
    • Renderbump1Multi - renders one bump map for each material found in the model (using imported texture mapping)
    • Renderbump2 - renders one bump map with automatic texture packing (equal size)
      This is intended for models that lack texture mapping; however, Renderbump1/1Multi often produces better results because of the hand-made texture mapping
  • Exports to .ORB (ascii) format (triangles with all normals, tangents and texture coordinates)
  • 3D view of resulting renderbump map using DOT3 bump-mapped lighting
  • Gouraud-shaded model next to it for comparison

… and a whole bunch of other features

Please test it and report bugs, feature requests…

ALSO, we need high-polygon models with texture-mapped low-polygon versions for testing and demo purposes. If you have any, or know of some good ones, please let me know.

The address is: http://www.soclab.bth.se/practices/orb.html

Hope you like the tool. I’m looking forward to some feedback from you guys.

Christian Seger
SOCLAB

How do you find the corresponding high-detail version? Do you do the ATI thing of finding the closest triangle using ray casting, or do you do the GDAlgorithms thing of finding the corresponding texture mapping coordinates (assuming unique mapping)?

Originally posted by jwatte:
How do you find the corresponding high-detail version? Do you do the ATI thing of finding the closest triangle using ray casting, or do you do the GDAlgorithms thing of finding the corresponding texture mapping coordinates (assuming unique mapping)?

Getting the high-poly triangles that correspond to a single one in the low-poly model is quite easy; it’s a multi-check thing:

  • Planes around the low-poly triangle form a frustum against which we can do a frustum check
  • Bounding spheres around the low-poly triangles,
    and so on… (a rough sketch is below)
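
In rough C++ the rejection step could look something like this. This is just an illustrative sketch with made-up names, not the tool’s actual code:

[code]
// Illustrative sketch only: side planes around a low-poly triangle,
// extruded along its face normal, form an open frustum; high-poly
// triangles whose bounding sphere lies fully outside any plane can be
// rejected early.
#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // dot(n, p) + d = 0, n unit length

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3  cross(const Vec3& a, const Vec3& b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

// One plane per edge, containing the edge and the face normal,
// oriented so the triangle interior is on the positive side.
void buildSidePlanes(const Vec3 v[3], const Vec3& faceNormal, Plane out[3])
{
    for (int i = 0; i < 3; ++i) {
        Vec3 edge = sub(v[(i + 1) % 3], v[i]);
        Vec3 n    = cross(faceNormal, edge);          // points inward
        float len = std::sqrt(dot(n, n));
        out[i].n.x = n.x / len;                       // normalize so the
        out[i].n.y = n.y / len;                       // radius test below
        out[i].n.z = n.z / len;                       // uses true distance
        out[i].d   = -dot(out[i].n, v[i]);
    }
}

// A high-poly triangle whose bounding sphere is completely behind any
// side plane cannot map onto this low-poly triangle: skip it.
bool sphereOutside(const Plane p[3], const Vec3& center, float radius)
{
    for (int i = 0; i < 3; ++i)
        if (dot(p[i].n, center) + p[i].d < -radius)
            return true;
    return false;
}
[/code]

Anything that survives these cheap rejection tests goes on to the expensive per-vertex work.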

Getting the high-poly vertex positions in the texture map is much harder, and more difficult to make “perfect”… However, our approach doesn’t use ray-casting like Crytek’s.

I will explain how it is done in an article later this spring, and source code will probably be available on the homepage, so stay tuned =)

Now I’m really concerned with making it stable and reliable…

> However, our approach doesn’t use ray-casting
> like Crytek’s.
>
> I will explain how it is done in an article
> later this spring, and source code will
> probably be available on the homepage, so
> stay tuned =)

That sounds interesting, but what’s wrong with ray-casting? I also have a little Max plugin that uses ray-casting to capture the detail, and it works nicely:
http://talika.eii.us.es/~titan/magica/
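
The core of the ray-casting approach is just a nearest-hit query per texel. A simplified sketch of the idea (not the plugin’s actual source; Vec3 and the vector helpers are as in the sketch earlier in the thread):

[code]
// Simplified illustration of the per-texel ray cast. Uses Vec3 / dot /
// sub / cross from the sketch earlier in the thread.
#include <cfloat>
#include <vector>

struct HighTri { Vec3 v0, v1, v2; Vec3 n; };   // n = shading normal to sample

static Vec3 scale(const Vec3& a, float s) { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }

// Standard Moller-Trumbore ray/triangle intersection; t is the hit distance.
bool intersect(const Vec3& orig, const Vec3& dir, const HighTri& tri, float& t)
{
    const float EPS = 1e-6f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -EPS && det < EPS) return false;       // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS;
}

// For one texel: pos and normal are interpolated from the low-poly
// triangle at the texel's UV. The nearest high-poly hit along the
// normal supplies the detail normal to store in the map.
Vec3 bakeTexel(const Vec3& pos, const Vec3& normal,
               const std::vector<HighTri>& highPoly, float maxDist)
{
    // Start a little below the surface so detail on both sides of the
    // low-poly hull is captured.
    Vec3 orig = sub(pos, scale(normal, maxDist));
    float bestT = FLT_MAX;
    Vec3 result = normal;                            // fallback: no hit

    for (size_t i = 0; i < highPoly.size(); ++i) {
        float t;
        if (intersect(orig, normal, highPoly[i], t) && t < bestT) {
            bestT  = t;
            result = highPoly[i].n;                  // nearest hit wins
        }
    }
    return result;   // transform to tangent space and pack into RGB after this
}
[/code]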

The other approach I’m aware of is to compute a dual map between the low- and high-resolution objects. That can be done by computing an automatic parameterization of the high-poly object and then reducing the polygon count. The method is fast and reliable, but places some restrictions on the high-resolution object.
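
Once both meshes share a parameterization, sampling is only a 2D lookup per texel. Roughly, with assumed types (my illustration, not production code; Vec3 as above):

[code]
// Find the high-poly triangle covering the texel's UV and interpolate
// its vertex normals with 2D barycentric weights.

struct UV       { float u, v; };
struct ParamTri { UV t0, t1, t2; Vec3 n0, n1, n2; };   // UVs + vertex normals

static float signedArea2(const UV& a, const UV& b, const UV& p)
{
    return (b.u - a.u) * (p.v - a.v) - (b.v - a.v) * (p.u - a.u);
}

// Returns true and the interpolated normal if uv falls inside tri.
bool sampleNormal(const ParamTri& tri, const UV& uv, Vec3& out)
{
    float area = signedArea2(tri.t0, tri.t1, tri.t2);
    if (area == 0.0f) return false;                     // degenerate in UV space
    float w0 = signedArea2(tri.t1, tri.t2, uv) / area;  // barycentric weights
    float w1 = signedArea2(tri.t2, tri.t0, uv) / area;
    float w2 = 1.0f - w0 - w1;
    if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) return false;  // outside triangle
    out.x = w0 * tri.n0.x + w1 * tri.n1.x + w2 * tri.n2.x;
    out.y = w0 * tri.n0.y + w1 * tri.n1.y + w2 * tri.n2.y;
    out.z = w0 * tri.n0.z + w1 * tri.n1.z + w2 * tri.n2.z;
    return true;                                        // normalize before storing
}
[/code]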

Could you explain the benefits of your method compared with the previous ones?

You can also do it in hw. For every lowres triangle, transform the highres model into tangent space, and use the gpu to render it into the framebuffer. To get it to rasterize the normals, you put the highres vertex normals in the color channel, and have the gpu interpolate.
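
Schematically, in fixed-function GL (just my sketch of the idea; it assumes the high-res vertices were already transformed into the current low-res triangle’s tangent space, and that the viewport/projection maps that triangle’s UV footprint to the framebuffer):

[code]
// Normals go into the color channel; the rasterizer interpolates them
// across each high-res triangle, and the framebuffer becomes the map.
#include <GL/gl.h>

struct HiVert { float px, py, pz;     // position, tangent space
                float nx, ny, nz; };  // unit normal, tangent space

void rasterizeNormals(const HiVert* verts, int numTris)
{
    glDisable(GL_LIGHTING);           // we want raw colors, no shading
    glDisable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);          // the GPU interpolates the "normals"

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < numTris * 3; ++i) {
        const HiVert& v = verts[i];
        // Pack the normal from [-1,1] into [0,1] RGB, DOT3 style.
        glColor3f(v.nx * 0.5f + 0.5f,
                  v.ny * 0.5f + 0.5f,
                  v.nz * 0.5f + 0.5f);
        glVertex3f(v.px, v.py, v.pz);
    }
    glEnd();
    // glReadPixels() then pulls the rasterized normals back as the
    // bump map texels for this low-res triangle.
}
[/code]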

I still think the ray-tracing method is more flexible, and generally works/looks better though.

John

Originally posted by John Pollard:
[b]You can also do it in hw. For every lowres triangle, transform the highres model into tangent space, and use the gpu to render it into the framebuffer. To get it to rasterize the normals, you put the highres vertex normals in the color channel, and have the gpu interpolate.

I still think the ray-tracing method is more flexible, and generally works/looks better though.

John[/b]

Bingo!
That’s pretty much what we do… =)

I’m thinking about implementing a ray-casting version too, comparing results, and using the best approach.

castano: downloading your plugin now, gonna try it out.

What about a Linux version?

(I can’t test Windows programs without Windows (Wine is no option))

Originally posted by richardve:
[b]What about a Linux version?

(I can’t test Windows programs without Windows (Wine is no option))[/b]

I know =)
I tried to separate the OS-specific code, so a Linux port might happen, but not in the next weeks/months; I’ve got a lot to do before I can take the time for that, sorry.
But maybe sometime this spring…