Unwrap parameters question
I need some clarification on how Unwrap using Optimal is influenced by Maximal texture count vs. Fixed texel size vs. Adaptive. If I'm after gleaning 100% of what's in the data, I'd think Maximal texture count would be appropriate. Yes? Does Fixed mean every triangle gets the same texel size, and if so, what size - smallest to largest in the model - is it fixed to? I take Adaptive to mean the features in the model are considered and given relative weight in determining how much pixel real estate goes to each section of geometry, based on how detailed the surface topology is. I'm not sure I've got that right, i.e. would subject matter that's rougher be given more pixels, or the other way around?
I'm looking at the output from Optimal using Adaptive and not understanding why the relatively flat ceiling would get more pixels than the books on a shelf. Push in and note the size of the ceiling vent; it looks easily 4-5x as large as the spines of the books toward the bottom.
Thanks for your help.
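For reference, here's the back-of-the-envelope arithmetic I'm working from on the Fixed texel size side (just the general relationship between texel size, surface area, and texture count, not RC's actual algorithm; every number below is a placeholder):

```python
import math

def textures_needed(surface_area_m2, texel_size_mm, texture_side_px=8192,
                    packing_efficiency=0.7):
    """Rough estimate of how many square textures a mesh needs at a fixed
    texel size. packing_efficiency stands in for UV space lost to padding."""
    texel_size_m = texel_size_mm / 1000.0
    texels_required = surface_area_m2 / (texel_size_m ** 2)
    texels_per_map = (texture_side_px ** 2) * packing_efficiency
    return math.ceil(texels_required / texels_per_map)

# e.g. 150 m^2 of surface at a 0.5 mm texel -> roughly 13 8K maps
print(textures_needed(150, 0.5))
```

The point being, as I understand it, that with Fixed the map count falls out of the texel size, whereas with Maximal texture count the texel size falls out of the texture budget.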
-
I used Fixed as I want consistency of resolution; RC captures way more detail than is needed for a lot of end uses anyway. Re: your ceiling, it is possibly to do with all the micro surface detail being captured (see my previous sentence), lol. By comparison, books are fairly simple geometry. Decimate the mesh and project all that detail onto it as a normal map or a displacement map.
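To make the decimate-and-project suggestion concrete, here is a minimal sketch of the decimation half using Open3D (the file names and target triangle count are placeholders; the normal/displacement bake itself would still happen in your baking tool of choice, e.g. ZBrush, xNormal or Blender):

```python
import open3d as o3d

# Load the dense export (placeholder file name).
dense = o3d.io.read_triangle_mesh("room_chunk_full.obj")

# Quadric edge-collapse decimation down to a manageable budget;
# the target count here is arbitrary and should be tuned per asset.
low = dense.simplify_quadric_decimation(target_number_of_triangles=500_000)
low.compute_vertex_normals()

o3d.io.write_triangle_mesh("room_chunk_low.obj", low)
# The high-frequency detail lost in this step is what you then project
# back onto the low-poly mesh as a normal or displacement map in a baker.
```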
-
I've also come to that conclusion: Fixed is best. But how is it best, and in relation to what? You contend the micro surface detail in the popcorn ceiling contains more surface area, so the Adaptive setting affords more pixel real estate there than to the smooth books on a shelf. What's not so evident is that this top shelf contains an enormous quantity of super detailed belongings, e.g. a fruit bat skeleton, antique film cameras, figurines, etc. I'm absolutely positive the surface area contained in all of those items sitting on the top shelf is well more than what's in the tiny popcorn bumps. This makes me question whether the Adaptive setting acts in reverse of the intuitive expectation, which is to place value on detail rather than on simpler surfaces. Yes?
-
Possibly; all I offered was my best guess. It's interesting that the ceiling uses more of the UV space as well. I'm doing as much learning as I can on this software, so if I come across an answer I shall let you know. I love photogrammetry and intend to follow it as a career once I graduate from my VFX degree.
-
I've reconstructed the entire room, some 2,800 42 MP images shot from 1'-4' away. There's still ample detail from what I could tell pushing a clipping box around the bric-a-brac; using Adaptive I have 127 8K maps among five reconstruction regions. That's downscaled 2:1 using Normal, and I'm curious to see what happens with High and/or with Fixed. Indeed, gleaning the full value of the data is no walk in the park. I'm testing the limits of the capture pipeline and also experimenting with real-time playback of point clouds, these from the undecimated models. I'd gladly post a render from RC, alas I blew a valve on my GPU, most likely caused by Unreal Engine badness. Only UE4 was blowing up at first; now RC is throwing "CUDA unknown error (30)". I believe this is an example of something we didn't used to believe was possible in PCs: software affecting hardware. Maybe later, after I recover, I'll upload some renders from RC.
-
I typically decimate before unwrap/texture; this test is strictly for streaming the full-res point cloud and seeing what's in the data. As a measure of polycount, a recent cave scan limited to some 600 images in a 25' x 25' space and run at High yielded 1.7 billion tris. Clearly, much of this data can be thrown out without appreciable loss to what you'll see from a reasonable (virtual) viewing distance. That said, we're interested to explore that data and discover things you're either strained to view in the real world or simply can't see. What also stands to reason is what happens when you decimate very narrow subject matter, hard edges, tight crevices, etc. The overall effect of decimation on models moves toward the look of shrink wrap or melted butter. That's why offloading that high-frequency detail to normal maps is so valued, though that won't save you on narrow geometry like thin sticks or cabling. Thus far I've gotten away with loading UE4 with some 43 million polys and sustaining frame rates in the 90s by leveraging occlusion culling, made possible by RC's option to control max verts per part and export a big model in parts. What a folder containing hundreds of chunk meshes means for normal maps, then, is having a batch processing script to point at that folder. An RC user here posted about such an animal, but I can't get a response. We'll see, it will shake loose.
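Something along these lines is what I have in mind for that batch step: a sketch that walks the folder of exported chunk meshes and shells out to a command-line baker. The bake_normals executable and its flags are purely hypothetical stand-ins for whatever tool actually does the bake:

```python
import subprocess
from pathlib import Path

CHUNK_DIR = Path("exports/room_chunks")   # folder of per-part meshes from RC
OUT_DIR = Path("exports/normal_maps")
OUT_DIR.mkdir(parents=True, exist_ok=True)

for high_poly in sorted(CHUNK_DIR.glob("*_high.obj")):
    low_poly = high_poly.with_name(high_poly.name.replace("_high", "_low"))
    out_map = OUT_DIR / (high_poly.stem.replace("_high", "") + "_normal.png")

    # 'bake_normals' is a hypothetical CLI baker; swap in the real tool
    # (xNormal, headless Blender, a ZBrush script, etc.) and its arguments.
    subprocess.run(
        ["bake_normals", "--high", str(high_poly),
         "--low", str(low_poly), "--out", str(out_map)],
        check=True,
    )
```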
-
Hi Misho,
I indicated that somebody posting here said he had developed such a plugin for batch processing a folder of chunk meshes in ZBrush, including normal maps, and that post suggested he'd freely share it with RC users. I tried reaching out to him but never got a response. Here's the post, maybe you can give him a poke:
see DotScot1's posts
Best,
Benjy