Question: Optimizing resolution using Optimal Texel Size and normal maps
It seems I'm only partially understanding how to optimize resolution in the geometry and texture info; thanks if someone can fill in the gaps. Reconstruction at High (after a Clean first), immediately followed by Unwrap using Fixed Texel Size to return the Optimal texel size, produced a component of 58 M tris with three 8K maps; texture quality shows 100%, texture utilization 73%, which tracks with one of the 8K maps being largely empty. I could improve texture utilization by switching to 4K maps, whatever.
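(Rough arithmetic on why that third map sits mostly empty, assuming utilization is simply occupied texels over total texels: three 8K maps hold about 201 Mtexels, 73% of which is roughly 147 Mtexels actually occupied; that's more than two full 8K maps (about 134 Mtexels) can take, so a third map gets created and ends up only partly filled.)

    # Back-of-the-envelope check (assumes utilization = occupied texels / total texels)
    maps, res, utilization = 3, 8192, 0.73
    total = maps * res * res      # ~201 Mtexels across three 8K maps
    used = total * utilization    # ~147 Mtexels actually occupied
    two_maps = 2 * res * res      # ~134 Mtexels, so the overflow spills into a third map
    print(used / 1e6, two_maps / 1e6)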
If I then Simplify to 5 million, I note that the textures are preserved, still three 8K maps. First question: if I hadn't unwrapped at 58 M, but instead waited to unwrap at 5 M, again using Fixed texel size, should I expect to see the same three 8K maps? That is, is the Optimal texel size calculated based on the resolution of the source data, i.e. whatever resolution is returned during reconstruction of a given component, or, if a simplified model is the first to be unwrapped, might Optimal texel size be constrained by the size of the tris in that decimated model?
A second question concerns how RC calculates normal maps. Is the amount of detail in the highest-res model within a component what's being written to normal maps? It's my understanding that RC requires a minimum of seven pixels to generate a triangle, yes? If so, then obviously there's a ton more texture information in the source data begging to be introduced, which I'm playing with now in Mari. Since some surfaces may be smooth but feature lots of color detail, you wouldn't want to globally tap source data for texture info when producing normal maps (in RC); the answer would be to selectively paint in which high-frequency details in the source data do belong, e.g. a knurled buckle, tiny cracks in clay, etc.
The first question relates to the second in that, if you're going to round trip through ZBrush or Mari or Mudbox to paint texture info into a model in order to bake normal maps, you want to simplify in RC to a sweet spot that preserves silhouette while optimizing performance downstream. But the number and size of your maps is what I don't want to see dropped, since that threatens the goal of gleaning every ounce of detail from the source data if sufficiently large (and sufficiently many) normal maps aren't showing up in whatever 3rd-party app is used to paint in those details.
Many thanks for a thoughtful response.
Benjy
-
Hello Benjamin,
thank you for taking the time to state your question so clearly and specifically.
To the first question: the optimal texel size should be calculated based on the scale of the model and the polygon count, as far as my knowledge goes. Therefore, as you suspected, the two models could differ in their unwrap results, as they have very different topology.
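To illustrate the relation between scale and texture budget (this is only my own rough model of it, not the actual RC algorithm): at a fixed texel size, the texture area you need grows with the surface area of the mesh, so a larger model simply calls for more or bigger maps.

    # Rough illustration only -- not RealityCapture's actual algorithm.
    import math

    def maps_needed(surface_area_m2, texel_size_m, map_res=8192, utilization=0.75):
        """Estimate how many square maps cover a mesh at a given texel size."""
        texels_required = surface_area_m2 / (texel_size_m ** 2)
        texels_per_map = map_res * map_res * utilization  # packing is never 100% efficient
        return math.ceil(texels_required / texels_per_map)

    print(maps_needed(40.0, 0.0005))  # hypothetical 40 m^2 mesh at 0.5 mm texels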
The second one. Here I am not sure if I got it right, but anyway: RC calculates it with the same method as most of the software out there. It is ray casting based on an average search distance between the source and the target model. The resulting normal map is just the difference between the two surfaces written into the pixel values. The higher the resolution, the more micro detail of the source model is projected and then rendered on the simple model.
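Very roughly, the idea looks like this (a toy sketch of the general ray-cast baking technique, not RC's implementation; the surfaces and numbers are made up): for each texel on the simple surface you search along its normal, within a limited distance, for the high-res surface, and write that surface's normal, relative to the simple model, into the pixel.

    import numpy as np

    # Toy bake: the low-poly surface is the plane z = 0, the high-res "source"
    # surface is a heightfield sitting on top of it. Sketch only.

    def highres_height(x, y):
        # hypothetical micro detail on top of the flat low-poly plane
        return 0.02 * np.sin(40 * x) * np.cos(40 * y)

    def bake_normal_map(res=256, search_dist=0.01):
        u = np.linspace(0.0, 1.0, res)
        x, y = np.meshgrid(u, u)

        # "Cast" straight along the low-poly normal (0, 0, 1): the hit point is
        # where the heightfield lies, but only if it falls within the search distance.
        z = highres_height(x, y)
        hit = np.abs(z) <= search_dist

        # Normal of the high-res surface from finite differences of the heightfield.
        dzdx, dzdy = np.gradient(z, u, u, axis=(1, 0))
        n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
        n /= np.linalg.norm(n, axis=2, keepdims=True)

        # Where the ray finds nothing, fall back to the low-poly normal (0, 0, 1).
        n[~hit] = [0.0, 0.0, 1.0]

        # Encode the difference from the low-poly normal in the usual 0-255 RGB range.
        return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

    normal_map = bake_normal_map()
    print(normal_map.shape, normal_map[0, 0])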
To give you a simple tip which I use in my workflow, and which speaks to those last sentences of yours: export the hi-res model for surface cleanup and bring it back to RC. Simplify it to about 3 or 5 M and send that out for the texture cleanup, then bring it back and reproject the diffuse texture onto the hi-res cleaned model. Then you can simply create the optimized model from that last one and have everything correct.
-
Hello Erik,
Thanks for your detailed reply. How does scale influence optimal texel size? It would seem arbitrary to think that just because the scan is of something big or small, there might not be a need or interest in preserving the finest details, given a virtual camera might closely graze either environment.
To further clarify, recall that in my case I had unwrapped for optimal texel size on the hi-res model to yield three 8K maps. Say I hadn't done that and instead went ahead and simplified to the lo-res model: would Unwrap using optimal texel size then be based only on the lower-res model of this component? If so, is it possible it would still yield the same three 8K maps? I can test this of course, just curious. Given the amount of redundant information in the hi-res model within the smoother areas, wouldn't that predict that with Simplify, which is adaptive and thus will lower the relative poly count in those smoother areas, the optimal texel size might stay the same down to some lower threshold? If that makes sense.
I'm unclear what you mean by taking a simplified model "3 or 5M and let it out for texture cleanup". Do you mean running that mesh through a 3rd-party app to further clean the geometry?
Thanks.
Benjy
-
Well, I am more spreading my thoughts here; I don't know the whole algorithm, but it makes sense that if a model is large, it would need more textures, for example; that is the job of fixed texel size as well.
Regarding your question about the simplified models: I believe the tool always works with the information of the selected model and no other. I doubt it would base the texel size on the source model rather than the one you've selected.
Yeah, that is what I meant. I said that because I know you use such apps in your workflow, based on a video of yours, so I thought I could give a tip.
-
I'm curious what kinds of edits you're working with when round tripping textures, and why not do that after final export; why the round trip? With the UV shells organized the way they are, for rough topology it's obviously next to impossible to discern features for editing in order to target them; I assume you're doing some kind of global adjustments?