Workflow to make use of Export Normals

Comments

4 comments

  • Jonathan_Tanant
    My personal workflow:
    - Reconstruct in High detail in RC.
    - Simplify to something high poly but still workable (between 10 million and 50 million triangles).
    - Export this model to OBJ with normals (this will be my high poly model).
    - Simplify to low poly (between 10k and 200k).
    - Unwrap and texture to exactly one texture (Maximal texture count set to 1 in RC).
    - Export this model to OBJ with normals (this will be my low poly model).
    - Import both models into Blender.
    - Select the low poly, go to Edit Mode, UV/Image Editor, and create an image the same size as my RC diffuse.
    - Bake high poly to low poly to create the normal map, save the image (the bake steps are sketched in the script below this list).
    - Bake high poly to low poly to create the ambient occlusion map, save the image.
    - Import everything into Unity and configure the material to point to the three maps.
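    Since the Blender part is so procedural, here is a minimal sketch of the bake step as a script, assuming Blender 2.8x+ with Cycles and that the low poly comes in with its RC material; the object names "rc_low" / "rc_high", the image name and the 8192 resolution are placeholders:

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'      # the selected-to-active bake lives in Cycles

    low = bpy.data.objects["rc_low"]    # low-poly OBJ from RC (placeholder name)
    high = bpy.data.objects["rc_high"]  # high-poly OBJ from RC (placeholder name)

    # Target image, same size as the RC diffuse (8192 is just an example).
    img = bpy.data.images.new("rc_normal_bake", width=8192, height=8192)

    # The bake writes into the active Image Texture node of the low poly's material.
    mat = low.data.materials[0]
    mat.use_nodes = True
    tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
    tex_node.image = img
    mat.node_tree.nodes.active = tex_node

    # Selected-to-active: high poly selected, low poly active.
    bpy.ops.object.select_all(action='DESELECT')
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low

    bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True)
    img.filepath_raw = "//rc_normal_bake.png"
    img.file_format = 'PNG'
    img.save()

    # Repeat with type='AO' and a fresh image for the ambient occlusion map.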
  • Benjy
    Hi Jonathan,

    Okay, this all makes good sense, and the workflow seems highly procedural within Blender, so it could be batched with scripting. I'm wondering about a few things for starters, though probably nothing easily answered; I'll need to jump in, test, and problem solve. To leverage occlusion culling, I don't want to export a large Reconstruction Region as one mesh with one texture. At the same time, there's no cost to breaking a large model (at the chunk size of the Reconstruction Region within a larger set) into many smaller mesh parts based on max verts per part, unless I go too far and the number of mesh parts invites too many draw calls on the CPU.

    What this spells for texture count then introduces a wrinkle. The mesh parts are too small for it to warrant one texture per mesh, but the group of meshes within the Reconstruction Region is also too big for just one texture, even a 16K. I'll have 100-some meshes sharing some other number of textures. So that's one thing to work out: how to bake normal maps when there isn't a one-to-one relationship between mesh and texture map.

    I still wonder about the data being thrown out from High on that first Simplify. You're coming down from, what, 150-500 million tris in a mesh at High to 10% or so for your high poly mesh. Think how much cleaner your normal maps and cavity maps would be if you had that 90% back. Isn't this yet another case for exporting in parts across multiple textures, if we could figure out the messiness involved? Say your monolithic High mesh was 500 million tris: set the max vert count at 10 M and export 50 meshes that could then be batch processed, divide and conquer. Yes, much processing time, but as long as you're not touching the machine, I see big gains as well.
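    As a rough illustration of that divide-and-conquer idea, a batch driver could just call Blender headless once per exported part, using a bake script like the one sketched above; the folder layout, the "_low"/"_high" naming and the "bake_part.py" script are all hypothetical:

    import pathlib
    import subprocess

    parts = pathlib.Path("export/parts")              # hypothetical export folder
    for low_obj in sorted(parts.glob("*_low.obj")):
        high_obj = low_obj.with_name(low_obj.name.replace("_low", "_high"))
        # Run Blender in background mode with a per-part bake script,
        # passing the two OBJ paths through to the script after "--".
        subprocess.run(
            ["blender", "--background", "--python", "bake_part.py",
             "--", str(low_obj), str(high_obj)],
            check=True,
        )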

    I'm sure some level of decimation is only smart, with predictably diminishing returns in the higher-frequency detail, but isn't it reasonable to think that much is being lost w.r.t. the fidelity of the normals in that 90%?
  • Jonathan_Tanant
    Yes, you are right.
    The number of textures is very important when we use the models for realtime 3D (Unity, UE...).
    Because more textures mean more draw calls, and we want to avoid them at all costs, especially on mobile (and even more in mobile VR, because we render everything twice, plus some more overhead).
    Another thing to take into account is that in Unity (I don't know about Unreal), meshes are limited to roughly 65k vertices (16-bit indices); if you have a bigger mesh, it is split into chunks under that limit, and that means more draw calls.
    So it comes out that, from my experience, the texture count should roughly equal (a rough rule of thumb) the number of 50-65k chunks I have in total.
    But it would be great to have more control over the way the models are split and textured. For example, I would love a way in RC to automatically split large spaces into, say, 50k-triangle areas, each textured with one texture. That way you optimize the draw calls.
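    As a back-of-the-envelope version of that rule of thumb (the numbers below are made up, not from RC):

    scene_polys = 10_000_000     # simplified low-poly scene, illustrative only
    chunk_size = 65_000          # roughly the per-mesh limit before the engine splits

    chunks = -(-scene_polys // chunk_size)     # ceiling division -> ~154 chunks
    textures = chunks            # one texture per chunk keeps the mapping 1:1
    draw_calls = chunks          # roughly one call per chunk/material, before batching
    print(chunks, textures, draw_calls)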

    I still wonder about the data being thrown out from High on that first Simplify. You're coming down from, what, 150-500 million tris in a mesh at High to 10% or so for your high poly mesh.

    Yes, right; it's just that a 50M-triangle model already takes something like 5 GB on my disk and a few minutes to load.
    Ideally, it would be great to have the baking of normal maps and occlusion/cavity maps in RC itself.
  • Benjy
    Jonathan,

    Been meaning to reply, deep in the data mines...

    Thanks for your input. I'm encouraged to believe these bottlenecks will recede in the rear-view mirror, but amen to your thought of having RC bake normal maps. The more of the workflow kept under one roof, the more attractive this already awesome solution. I don't know how involved a project this is for the engineers, but you'd think the heavy lifting of making normal maps and cavity maps based on High reconstructions would already be in place. And if it's too far out of Capture Reality's wheelhouse, they might consider partnering with Substance Builder or such and integrating those tools as a plugin. If you'll post this in the Feature requests forum, I'll echo support, but maybe Milos sees it here and shares his thoughts.

    To your statement:
    Because more textures mean more draw calls, and we want to avoid them at all costs, especially on mobile (and even more in mobile VR, because we render everything twice, plus some more overhead).
    My work isn't pointed at mobile presently; I can assume beefy GPU specs like a GTX 1080 Ti or the latest flavor. Together with Granite for UE4 and Unity, the promise of texture streaming renders concerns over texture limitations largely moot. All the more reason to pack fine geometry into numerous 16K normal maps. It would seem the bigger question today is what's required to glean all the value from a reconstruction in High and get it into large normal maps.

    In the meanwhile, how many draw calls would you suggest are acceptable with a GTX 980 Ti? I was seeing 160 at peak with a 450K-poly test mesh (no limit in UE4) and ten 4K textures, and the frame rate never dropped below 90. I just wanted to see what was in the data; sheer beauty. But I do need to set a budget for draw calls that provides for a large set.

    The plan I've been following is to break the set into fairly large, nested Reconstruction Regions, with parts set to 5 M max vertices, getting between 70-150 parts depending on what's inside those RRs. The set in UE4 will be broken into Level Streaming Volumes (LSVs), possibly new terminology if you're coming from Unity. When the user in one level enters a region overlapping a second LSV, the second one is loaded while you're still in the first, and when the user leaves that overlapped region, the first level is unloaded. If I'm seeing it right, I'd then want to plan for the lowest common denominator (poly count, number of meshes, number and size of textures) for any two levels sharing an overlapped region. Do you have any sense of how many draw calls are too many for the GTX 980 Ti?
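    For what it's worth, the budgeting arithmetic I have in mind for the overlapped-LSV case is just the worst resident pair; the level names and part counts below are placeholders:

    # Mesh parts per Level Streaming Volume, and which pairs share an overlap region.
    lsv_parts = {"lsv_a": 80, "lsv_b": 120, "lsv_c": 95}
    overlaps = [("lsv_a", "lsv_b"), ("lsv_b", "lsv_c")]

    # Worst case is the heaviest pair that can be resident at the same time.
    worst = max(overlaps, key=lambda pair: lsv_parts[pair[0]] + lsv_parts[pair[1]])
    peak_draw_calls = sum(lsv_parts[name] for name in worst)   # ~1 call per part/material
    print(worst, peak_draw_calls)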
