I just wanted to know this community's hard-core coder opinion about a new technique, called PolyCube-Maps, for texture parameterization that is:
[ul][li]seamless[/li][li]mesh-independent (e.g. LOD robust)[/li][li]low-distortion[/li][li]texture-space efficient[/li][li]mipmap robust[/li][li]user-controllable if you need it (YOU decide which portions of the mesh are important)[/li][/ul]
This technique was recently developed by our group (vcg.isti.cnr.it) and we will present it at the next SIGGRAPH. From the abstract:
Standard texture mapping of real-world meshes suffers from the presence of seams that need to be introduced in order to avoid excessive distortions and to make the topology of the mesh compatible to the one of the texture domain. In contrast, cube maps provide a mechanism that could be used for seamless texture mapping with low distortion, but only if the object roughly resembles a cube. We extend this concept to arbitrary meshes by using as texture domain the surface of a polycube whose shape is similar to that of the given mesh.
Obviously, the technique can be implemented in OpenGL on current hardware.
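Since the technique builds on cube maps, it may help to recall how a standard cube-map lookup selects a face from a direction's dominant axis; a polycube generalizes the domain to many cells but keeps the same per-face idea. A minimal C++ sketch of the classic face selection (illustrative only, following the usual GL face convention, not code from the paper):

```cpp
#include <cmath>

// Classic cube-map lookup: choose the face hit by direction (x, y, z)
// and compute (s, t) in [0, 1] on that face. Faces are numbered
// 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z, following the usual GL convention.
struct CubeCoord { int face; float s, t; };

CubeCoord cubeLookup(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    CubeCoord c;
    float ma, sc, tc; // major-axis magnitude and the two face-local components
    if (ax >= ay && ax >= az) {        // X is dominant
        ma = ax; c.face = x > 0 ? 0 : 1;
        sc = x > 0 ? -z : z; tc = -y;
    } else if (ay >= az) {             // Y is dominant
        ma = ay; c.face = y > 0 ? 2 : 3;
        sc = x; tc = y > 0 ? z : -z;
    } else {                           // Z is dominant
        ma = az; c.face = z > 0 ? 4 : 5;
        sc = z > 0 ? x : -x; tc = -y;
    }
    c.s = 0.5f * (sc / ma + 1.0f);     // remap [-1, 1] to [0, 1]
    c.t = 0.5f * (tc / ma + 1.0f);
    return c;
}
```

In a polycube map an analogous projection happens per cell of the polycube rather than for a single cube, which is what makes the domain fit arbitrary shapes.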
More info, pictures, a paper and a short movie at the following link:
Your comments, opinions, and criticisms are very welcome.
I haven’t seen any real problems with seams in modern games. This is more of a solution looking for a problem.
I thought the paper was excellent. I’ll be awaiting the downloads.
I’m curious to see how your technique would work with texture-space rendering, something like the approach described in ATI’s skin shading paper from this year’s GDC. Have you experimented using PolyCube-Maps with any of these techniques?
Yes! You got the main point: until now, seams were mainly an artist issue. From the coder’s point of view, you just need to handle the fact that the same point, with the same vertex position and normal, can have more than one texture coordinate.
Now, with the advent of render-to-texture algorithms, seams are an issue for us too. Look at the ATI slides for skin rendering (slide 21, first item: “texture seams can be a problem…”): they have to cope with the discontinuity of the parametrization explicitly.
With polycube maps, texture coordinates become truly ‘per vertex’ (no need to send the same vertex to the GPU with different texcoords), which means simpler data structures, better stripping, etc.
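To make the data-structure point concrete: in a standard atlas, a seam vertex carries a different texcoord per chart, so the GPU vertex buffer must duplicate it; with a seamless parametrization the (position, texcoord) pairing is one-to-one. A hypothetical C++ sketch (names and layout are my own, not from the paper):

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Per-corner attribute: which position a triangle corner uses, and
// which texcoord. Along a seam, the same position index appears with
// two different uv indices, forcing a split in the vertex buffer.
struct Corner { int pos; int uv; };

// Build a GPU-style vertex buffer: one output vertex per distinct
// (position, uv) pair. With a seamless parametrization, uv == pos for
// every corner, so the buffer size equals the true vertex count.
std::size_t countGpuVertices(const std::vector<Corner>& corners) {
    std::map<std::pair<int, int>, int> seen;
    for (const Corner& c : corners)
        seen.emplace(std::make_pair(c.pos, c.uv), (int)seen.size());
    return seen.size();
}
```

For example, two triangles sharing an edge that lies on a seam produce 6 GPU vertices from only 4 mesh positions; with one texcoord per vertex they produce exactly 4.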
Agreed. Every modeller I know constantly bemoans texturing. One of my coworkers loves Z-Brush, but the mappings it generates are less than ideal.
In my opinion, the motivation for a technique like this is a no-brainer. The closer the intermediate surface resembles the actual surface being mapped, the better. This is an excellent solution.
The paper mentions the problem with the shift operation in current hardware, and the associated fix. I wonder if near-future hardware will support bit operations, and have the fragment horsepower to handle this technique. I certainly hope so.
great work. can’t imagine how happy my favourite artist friend will be once he gets that link later today. currently, he’s sleeping, so i hope he has nice dreams, and not nightmares about texturing
and i, on my side, am rather happy with it, too… this could help programmer art as well, as we are even less keen on texturing
This is an excellent solution.
Except for the whole performance issue. This isn’t like parallax bump mapping, which has a fairly negligible impact on performance (compared to having bump mapping at all) for a massive impact on visual quality.
This technique trades per-fragment performance for artist time and some small per-vertex performance benefit in terms of stripping. I, for one, don’t believe in trading performance for anything less than a non-trivial visual quality improvement. We’re talking about operations that are non-trivially complex.
Artists, while they may bemoan doing it, still can do it. As such, the problem is solved.
Now, if hardware starts implementing this, such that my texture accesses can automatically do it, that’s a different story. But, until then, I can find more important things to spend performance on.
korval: this is RESEARCH. we know you don’t give a fuzz about that, but this is research, and it was never meant to be fast from the start. once we have the demo, people can start to optimize and compare against it, and next-gen hw can start to add features that help to optimize it.
but we see more and more of the invisible part of the power of fragment shaders: the ability to do anything with a texture. different shaders to sample different sorts of shadow depth maps, different shaders for autogenerated texture maps in this case.
the more we can automate this whole process, the more artists can do art. this can help solve most of the lightmappers’ seam issues, it can help autogenerate doom3-style multi-lod bump/parallax-mapped characters out of high-res models, etc.
let’s not start a flamewar about this just because of performance. performance is never an issue when you develop something new. once it’s working, it starts to be one. never before.
just delay your game for another half or full year because of some source leaks, and hw will have advanced enough to be fast enough again
would love to see this in “normal” 3d apps (3dsmax, lw…) as well, because uv mapping hi-res stuff for texture baking is definitely a “no fun” thing.
definitely. instead, creating a lowres “voxel” version of your mesh and using that as a base to do polycubemapping on could be very easily doable in an editor.
damn artist, still sleeping
Hmm… This is an interesting technique, especially because it’s very different from the traditional automatic texture-mapping algorithms. However, it requires additional cost at the fragment level. I haven’t read the paper yet, but at first glance it seems that you have to do the texture filtering manually… it would be nice if this was hardwired and handled by the hardware, or if we had enough fragment power to handle this at lower cost.
The thought of using cubemaps to do standard texturing had crossed my mind as well, but I rejected it quickly.
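For reference, “manual texture filtering” here means the fragment program fetches the four neighbouring texels and blends them itself, since the hardware bilinear filter cannot be trusted across squarelet boundaries. A rough C++ emulation of that step on a plain clamped 2D texture (an illustration of the cost, not the paper’s actual shader):

```cpp
#include <cmath>
#include <vector>

// Minimal texture: w x h single-channel texels, clamp-to-edge.
struct Texture {
    int w, h;
    std::vector<float> texels;          // row-major, size w * h
    float fetch(int x, int y) const {   // nearest fetch with clamping
        if (x < 0) x = 0;
        if (x >= w) x = w - 1;
        if (y < 0) y = 0;
        if (y >= h) y = h - 1;
        return texels[y * w + x];
    }
};

// Manual bilinear filter: the four fetches plus lerps a fragment
// program must perform when it cannot rely on the hardware filter.
float bilinear(const Texture& tex, float u, float v) {
    float x = u * tex.w - 0.5f;         // texel-space coordinates
    float y = v * tex.h - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;     // fractional blend weights
    float a = tex.fetch(x0,     y0);
    float b = tex.fetch(x0 + 1, y0);
    float c = tex.fetch(x0,     y0 + 1);
    float d = tex.fetch(x0 + 1, y0 + 1);
    return (a * (1 - fx) + b * fx) * (1 - fy)
         + (c * (1 - fx) + d * fx) * fy;
}
```

Doing this per sample, on top of resolving the polycube cell, is where the per-fragment instruction count comes from.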
I think this idea has very good potential, except for the encoding in the upper left, which seems ugly.
Instead, having a kind of index number per vertex would seem cleaner. One could use the fourth coordinate since it is wasted on accessing cubemaps.
I will read the paper right after, but I was wondering how big a polycubemap can be with the current approach.
any ETA on the running demo ( and code ) ?
yes, performance is an issue: the fragment program is rather long (57 instructions), but both the speed of hw and the length of allowed fragment programs are increasing a lot. Moreover, the cost is amortized if you are using more than one texture with the same parametrization; probably this is the most common case for complex shaders.
The advantages of this technique (apart from being artist-friendly) are many. Probably one of the most important is independence from the mesh topology: the same texture can work for different LOD models, and you can simplify your model without constraining the mesh to maintain the same seam topology.
Moreover, we (and many others here) think that polycubemaps can lead to more artist-friendly texturing tools. And this can lower the overall cost of game development (and producers need more artists than engine designers…).
Obviously, if hw supported polycubemaps, everything would be easier and hidden from the users.
remember to let us know the opinion of your favourite artist friend!
Yes, having bitwise operations in fragment shaders will help a lot.
i agree, encoding the LUT in the corner of the texture is not a wonderfully clean design choice; in theory it is something conceptually different from the texture itself, but with current hw and fragment programs the texture itself was the best place to store it.
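To picture the indirection involved (purely illustrative; the paper’s actual packing differs): each polycube cell id resolves through a small table to its squarelet’s origin in the atlas, and the final texel is that origin plus a local offset. A hypothetical C++ sketch:

```cpp
#include <vector>

// Hypothetical per-cell lookup table, playing the role of the LUT the
// paper packs into a corner of the texture: each polycube cell maps to
// the (x, y) origin of its squarelet inside the texture atlas.
struct SquareletRef { int x, y; };

struct PolycubeLut {
    std::vector<SquareletRef> cells;   // indexed by flattened cell id

    // Resolve final texel coordinates: squarelet origin plus the
    // local offset computed from the position inside the cell.
    SquareletRef resolve(int cellId, int lx, int ly) const {
        SquareletRef base = cells[cellId];
        return { base.x + lx, base.y + ly };
    }
};
```

Storing this table in the texture itself just means the first fetch of the lookup reads LUT texels instead of colour texels; per-vertex encodings (like the fourth-coordinate idea above) would trade that fetch for extra vertex data.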
Availability of the code? Surely before SIGGRAPH; we are cleaning up our library (vcg.sf.net), so stay tuned!
Thanks to everyone for the insightful comments!
Go on please! I really would like to hear some informal comments from the HW guys…
the fragment program is rather long (57 instructions), but both the speed of hw and the length of allowed fragment programs are increasing a lot.
It’s going to be quite some time before a 57-instruction fragment program can be considered worth the expense. Like possibly even post R500 (though some of the high-end R500’s can probably handle it).
Even with a 500-instruction shader, another 57 opcodes is still over 10% of that, and adding 10% more opcodes is going to cause a slowdown if you’re pushing the hardware.
Moreover, we (and many others here) think that polycubemaps can lead to more artist-friendly texturing tools.
That isn’t the issue. The issue is spending performance to solve an art-creation problem. It is very difficult to justify 57 per-fragment opcodes just to make your modellers’ and texture artists’ lives easier.
And this can lower the overall cost of game development (and producers need more artists than engine designers…).
Except that you now need better engines (and engine designers), because you have to optimize your code more/remove features to make up for the 57-opcodes-per-fragment that you’ve taken up.
Obviously, if hw supported polycubemaps, everything would be easier and hidden from the users.
Behind you 100% on the integration-into-texture-units front.
Yes, I know: artists usually draw textures by hand and have pixel-wise control over their work. Polycubemaps can give the artist good control over how to unwrap their models, without incurring seams. For example, let’s take a look at a model (totally random entry: gothicgirl) and consider the unwrapping of the head. Artists often unwrap models along their main feature shapes (a head is unwrapped like a cube, a leg like a cylinder, and so on); with polycubemaps this approach still works: the underlying polycube has whatever structure the artist likes best. It is then straightforward to imagine a sw tool that assembles the squarelets into a planar bitmap for classical editing, but without seams, etc.
Polycubemaps are not totally automatic (for now): the shape of the polycube can be chosen by the artist. On the other hand, polycubemaps also work quite well for normal maps re-detailing low-poly meshes (aka detail preserving).
Content creation is becoming the bottleneck in games; at least that’s what prominent developers have been saying recently.
While this might not be immediately applicable across the board, due to the performance impact, it’s still a step in the right direction.