Unwrap parameters question

Comments

11 comments

  • Tim B

    +1 for this. Been wondering the same thing. 

  • brek01

    I used Fixed as I want consistency of resolution; RC captures way more detail than is needed for a lot of end uses anyway. Regarding your ceiling, it's possibly down to all the micro surface detail being captured (see my previous sentence), whereas by comparison the books are fairly simple geometry. Decimate the mesh and project all that detail onto it as a normal map or a displacement map.
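    The decimate-then-bake idea can be illustrated with a toy vertex-clustering decimator (a sketch only; in practice you'd use RC's own simplify tool or a DCC package, not hand-rolled code). Vertices are snapped to a grid cell, and triangles whose corners collapse into the same cell are dropped:

```python
# Minimal vertex-clustering decimation sketch (illustration only).
# vertices: list of (x, y, z) tuples; triangles: list of (i, j, k) index tuples.
def decimate(vertices, triangles, cell=1.0):
    cluster_of = {}       # grid cell -> new vertex index
    new_vertices = []
    remap = []            # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_triangles = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:   # drop collapsed (degenerate) triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

    A fine cell size leaves the mesh intact; coarser cells merge micro detail away, which is exactly the detail you'd then recover by baking a normal or displacement map against the original.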

  • BenjvC

    I've also come to that conclusion: Fixed is best. But how is it best, in relation to what? You contend the micro surface detail in the popcorn ceiling contains more surface area, the Adaptive setting affording it more pixel real estate than the smooth books on a shelf. What's not so evident is that this top shelf contains an enormous quantity of super-detailed belongings, e.g. a fruit bat skeleton, antique film cameras, figurines, etc. I'm absolutely positive the surface area contained in all those items on the top shelf is well more than what's in the tiny popcorn. This makes me question whether the Adaptive setting acts in reverse of the intuitive approach of placing value on detail rather than on simpler surfaces. Yes?

  • brek01

    Possibly; all I offered was my best guess. It's interesting that the ceiling uses more of the UV space as well. I am doing as much learning as I can on this software, so if I come across an answer I shall let you know. I love photogrammetry and intend to follow it as a career once I graduate from my VFX degree.

  • BenjvC

    Perhaps the RC devs would care to weigh in on what value they had in mind with Adaptive, and whether my observation accurately reflects what's going on.

  • Tim B

    Benjamin,

    What does the mesh of that scene actually look like? 

    Regarding Andrew's comments, RC does seem to allocate way too much surface roughness to flat surfaces. Would be interesting to see how the mesh of the ceiling compares to the mesh of the shelves. 

  • BenjvC

    I've reconstructed the entire room, some 2800 42 MP images shot from 1'-4' away. There's still ample detail from what I could tell pushing a clipping box around the bric-a-brac; using Adaptive I have 127 8K maps among five reconstruction regions. That's downscaled 2:1 using Normal, and I'm curious to see what happens with High and/or with Fixed. Indeed, gleaning the full value of the data is no walk in the park. I'm testing the limits of the capture pipeline, and also experimenting with real-time playback of point clouds from the undecimated models.

    I'd gladly post a render from RC, alas I blew a valve on my GPU, most likely caused by Unreal Engine badness. At first only UE4 was blowing up; now RC is throwing "CUDA unknown error 30". I believe this is an example of something we didn't used to think possible in PCs: software affecting hardware. Maybe later, after I recover, I'll upload some renders from RC.
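    For scale, here's a rough back-of-envelope on what those 127 maps cost in raw texture memory, assuming uncompressed 8-bit RGB (an assumption; the actual on-disk and GPU formats will differ):

```python
# Back-of-envelope texture budget for the map counts mentioned above.
# Assumes uncompressed 8-bit RGB texels; real formats compress heavily.
maps = 127
full_res = 8192              # "8K" square maps
downscaled = full_res // 2   # 2:1 downscale -> 4096
bytes_per_texel = 3          # 8-bit RGB

per_map = downscaled ** 2 * bytes_per_texel   # bytes per 4K map (48 MiB)
total_gib = maps * per_map / 2 ** 30
print(f"{per_map / 2**20:.0f} MiB per map, {total_gib:.1f} GiB total")
```

    Roughly 6 GiB of raw texels even after the 2:1 downscale, which is why streaming and compression matter at this scale.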

  • Tim B

    Good luck with that valve job, Benjamin!

    Have you tried decimating then texturing? I'm guessing your mesh is pretty large based on the clipping box comment. 

    Curious how/if that changes things. 

  • BenjvC

    I typically decimate before unwrap/texture; this test is strictly for streaming the full-res point cloud and seeing what's in the data. As a measure of polycount, a recent cave scan limited to some 600 images in a 25' x 25' space and run at High yielded 1.7 billion tris. Clearly, much of this data can be thrown out without appreciable loss to what you'll see from a reasonable (virtual) viewing distance. That said, we're interested to explore that data and discover things you're either strained to view in the real world or simply can't see.

    What also stands to reason is what happens when you decimate very narrow subject matter: hard edges, tight crevices, etc. The overall effect of decimation on models moves toward the look of shrink wrap or melted butter. That's why offloading that high-frequency detail to normal maps is so valued, though that won't save you on narrow geometry like thin sticks or cabling.

    Thus far I've gotten away with loading UE4 with some 43 million polys and sustaining frame rates in the 90s by leveraging occlusion culling, made possible by RC's option to control max verts per part and export a big model in parts. What a folder containing hundreds of chunk meshes means for normal maps, then, is having a batch processing script to point at that folder. An RC user here posted about such an animal, but I couldn't get a response. We'll see; something will shake loose.
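    A sketch of that batch script idea (the bake tool name and its flags are hypothetical placeholders, since no specific tool is settled on; only the point-at-a-folder part is the point):

```python
# Sketch: collect RC "export in parts" chunk meshes and pair each with a
# bake command. "bake-tool" and its "-o" flag are HYPOTHETICAL placeholders;
# substitute whatever baker (ZBrush script, xNormal, Blender, ...) you use.
from pathlib import Path

def build_bake_commands(chunk_dir, tool="bake-tool"):
    """Return one command (argv list) per chunk mesh found in chunk_dir."""
    commands = []
    for mesh in sorted(Path(chunk_dir).glob("*.obj")):
        out = mesh.with_name(mesh.stem + "_normal.png")
        # to actually run each job: subprocess.run(cmd, check=True)
        commands.append([tool, str(mesh), "-o", str(out)])
    return commands
```

    Pointed at a folder of hundreds of chunk meshes, that yields one bake job per chunk, which is straightforward to run sequentially or farm out in parallel.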

  • misha

    Hey Benjamin, what are you using for getting normal maps from the hundreds of chunk meshes?

    thanks,

    Misho

  • BenjvC

    Hi Misho,

    I indicated that somebody posting here said he had developed such a plugin for batch processing a folder of chunk meshes in ZBrush, including normal maps, and his post suggested he'd freely share it with RC users. I tried reaching out to him but never got a response. Here's the post; maybe you can give him a poke:

    https://support.capturingreality.com/hc/en-us/community/posts/115001154171-Generate-Texture-while-KEEPING-UV-s-from-Retopologized-Mesh-?page=1#community_comment_360002948412   

    see DotScot1's posts

    Best,

    Benjy
