Draft vs Normal vs High Detail

Answered

28 comments

  • Wishgranter
    Hi Christian Perasso

    In this case you use a pretty strong SIMPLIFICATION, so I'd say it's not worth it for you to use HIGH recon mode; NORMAL mode should be OK for your use case. HIGH recon mode recovers more detail in general (sharper results).
  • Götz Echtenacher
    To add my 2 cents:
    Even from 100 to 5 mln is such a tremendous simplification that I don't think it would matter much if you started from 300... :-)
    Why not try it out and post the results?
  • Thank you for the clarification. I understand that the simplification made by RC is applied to all the assets, without any distinction between planes or surfaces and objects.
    For our needs it would be great if RC's simplification could work in a different mode, because on a linear piece of tarmac we don't need much detail, while on the curbs and various other objects we need as much detail as possible.
    I know that it would be a very heavy calculation, but I think a selection tool in RC would be perfect to select zones and apply more or less of the desired simplification.

    Thanks a lot for the comments.
    For now I'll go with the normal details, maybe in future I'll make more experiments and I'll share some info about this topic.

    Christian
  • chris
    Even if you are going to simplify all the models down to 5 mil, the high detail version will be better.

    But depending on what you're doing, it may not be that much better than normal, and it will take a lot longer.

    So it's worth trying with the normal version first.
  • Götz Echtenacher
    @Christian:
    What you are looking for is ZBrush. I looked into it myself a while ago and was quite impressed. It's not too expensive and one can get results within a day or two. There is a tool that does exactly what you are looking for - you can color the high detail mesh in areas where you want less simplification. I can't imagine RC implementing something like that - you can't have everything in one program, and RC is doing what it is supposed to do well enough. There are also many core features that need improving... :-)

    I personally decided that the in-program simplification is actually not bad at all - it doesn't just smother everything. Edges are preserved and plain faces are reduced more, within a certain range of course. It also depends on the object. In your case I wouldn't worry too much. You will have to try a bit for yourself; simplification is not a precise science... ;-) If you have access to Agisoft, I find their simplification tool is not bad either and differs a bit in the resulting details.

    Just out of curiosity: What accuracy exactly are you looking for? Are we talking about centimeters or half meters? Since it is - I presume - a dirt track and will shift slightly with each race, I can't imagine you want to be on the cm side. How big is your area and how much is covered by one image, meaning what measurement does one pixel have approximately? Did you use drones?
  • Götz Echtenacher
    @chris:

    Why do you think it matters with such a rate of simplification? Are you talking about sub-pixel accuracy of individual vertices or overall geometry? Do you have examples?
    Doesn't it depend on what you are trying to do and how you want to achieve it? I guess that if you have a fixed set of images that you have to work with (aerial images for example) and then need to squeeze the last bit of detail out of your material, then high detail is probably the only way to go. If however you want to scan something close range, where you can go back and take more images if needed, I think that is also a possibility and doesn't necessarily require high detail calculation. I have tried that on occasion, and the result in my cases was that high detail spews out WAY too many points for my needs and also takes WAY too long. Normal detail is also pushing it sometimes, in my opinion. My examples are architectural features of medieval buildings, e.g. a base of 50 cm diameter, and I don't even have to fill the whole image (10 MP) with it to get results that are good enough to measure the moulding accurately.
    But I am keen to learn more! :-)
  • Thanks Götz, we use ZBrush, Max, Marmoset etc., but exporting a 300 mln model is very difficult for space and time reasons, not to mention the time required to open the OBJ.
    I think it's better if I explain a little more of what we do with RC:
    - we use a couple of drones to take aerial photos
    - we reproduce motocross and superbike tracks from them
    - for motocross we simplify the normal detail model to 4/5 mln and calculate a heightmap
    - we put the heightmap in Unreal and model the props, buildings etc. in substitution of the photogrammetry ones (not exactly, but I think you can imagine the process)
    - for the superbike tracks we want to do the same

    Photogrammetry is very, very good for motocross and we don't need much more, but for the tarmac surfaces we have problems.
    Laser scan data is better on planes, but it retains so much detail that, for us, it's a waste of data, although we only need precision of 4/5 cm (laser data goes down to a few mm).

    Being able to reproduce plane surfaces with much less detail than other parts of the tracks would be perfect for us.

    Also, I keep learning, and your comments help me a lot.

    Cheers,
    Christian
  • Götz Echtenacher
    Hi Christian,

    so you seem to be better equipped than I am! And probably more experienced as well... :D
    But I am glad if my rambling helps!

    So do you have difficulties with the tarmac due to uniform color and that is why you need higher detail calculation?
    Then you might try to follow the track at a lower altitude next time so you get distinguishable texture?

    As I understand it, you are not primarily interested in the polygon or vertex model but the heightmap, which is nothing else than a raster image, right? So your problem is not the high polygon count per se but rather that it is one of the steps to achieving the heightmap?
    So what you really need is software that can produce a heightmap from a large point cloud, right? I never used CloudCompare, but as I understand it, that is one of its features. Have you tried that yet? I looked real quick and it says it can do 2 billion. No idea about the performance though...
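    Just to illustrate the idea (this is a hypothetical numpy sketch, not how CloudCompare actually implements it): producing a heightmap from a point cloud is basically binning the points on a grid and keeping one height value per cell.

    ```python
    import numpy as np

    def heightmap_from_points(points, cell_size):
        """Rasterize an (N, 3) point cloud into a 2D heightmap.

        Each grid cell stores the maximum Z of the points falling in it;
        empty cells stay NaN. A real tool would also interpolate holes.
        """
        xy = points[:, :2]
        z = points[:, 2]
        origin = xy.min(axis=0)
        idx = np.floor((xy - origin) / cell_size).astype(int)
        shape = idx.max(axis=0) + 1
        hm = np.full(shape, np.nan)
        for (i, j), h in zip(idx, z):
            if np.isnan(hm[i, j]) or h > hm[i, j]:
                hm[i, j] = h
        return hm

    # Four points on a 1 m grid: the first two fall into the same cell.
    pts = np.array([[0.0, 0.0, 1.0],
                    [0.2, 0.3, 2.0],   # same cell as the first, higher
                    [1.5, 0.5, 3.0],
                    [0.5, 1.5, 4.0]])
    hm = heightmap_from_points(pts, cell_size=1.0)
    print(hm[0, 0])  # 2.0 - the higher of the two points in cell (0, 0)
    ```

    Of course a Python loop won't scale to billions of points, but the principle is the same at any size.
    
    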

    Good luck!
    Götz
  • Hi Götz,
    I'm really a newbie with RC and the other software; I've been working with it for a couple of months.
    But I learn quickly and I fly drones a lot...
    Yesterday I did the track of Imola in Italy. I worked all day to stitch all the photos (taken with DroneDeploy and a small part with Altizure), but I have some banana problems :-))
    Sadly the profile of the Phantom 4 Pro I use is not supported by RC, and instead of getting a linear model I have a broken one.
    Maybe I have to set some parameters; tomorrow I'll make a new post with results and a request for more info.
    For now I can tell you that PS worked better with this kind of asset; Imola is 5 km long.
    We use almost everything - PS, Zephyr, CloudCompare etc. - and I think that different assets can give better results in one software or another.
    For example, a motocross track with a total area of less than 1 square km comes out very well in RC, not much different from PS.
    Maybe I made some mistakes and tomorrow you'll read about it, but I'm here to learn and find some help.
    P.S. I use 24 CPU cores at 3.4 GHz, 2 Quadro 4000s and 128 GB RAM, but I have some issues with pagefile.sys - it grows way too large (yesterday 330 GB). Tomorrow I'll make a post to find out more about that too...
  • Götz Echtenacher
    Hi Christian,

    that kind of problem (not properly aligned in Imola) is usually Wishgranter's specialty to help with. I wonder why he hasn't posted anything for a couple of days. That's not like him at all - maybe he's just taking a break for once... :-)
    Yes, there seem to be specific setups where one software is better than another. I don't think it is possible to predict it, though. What I came across recently is that somebody suggested flying at different altitudes and also taking angled shots as opposed to only straight down:
    chris wrote:
    are you shooting straight down?

    I find shooting down at 45 degrees with a Phantom 3 doing orbits works pretty well, rather than shooting in the classic grid pattern.

    I did one today that worked well - only around 250 photos though - and got a high detail model in an hour or so.

    now I'm just trying to add another 10000 ground level photos, and then I'll have another 2000 or so heli shots from a previous shoot to add after that. I'll see how I go with all of these, but I'll probably run into issues.

    and the following posts...

    Also the pagefile has been discussed in this forum to some extent - if you haven't looked already...

    Wow, your hardware is quite something - like a new Ferrari compared to my old Fiat... ;-)
  • Yep, we are using two kinds of photos, the zenithal and the angled ones.
    We took 2500 images at different heights, but RC gave us the banana model.
    PS worked perfectly with them; today I'll take some measurements in the program and compare them with the ones Imola gave us.
    I'll let you know the accuracy.

    I'll read the previous posts on pagefile.sys, thanks for letting me know.
    Best,
    Christian
  • Götz Echtenacher
    Hmm, weird.
    Is this error marginal or really obvious?
    Did you try using a calibration group for identical lenses?

    Btw, you should probably open a new topic, because we are way off the title of this one... :-)
  • chris
    About the simplification levels and detail settings.

    I did quite a bit of testing, but each case will be different.

    I was shooting with a DSLR from a heli, so I was quite far away and needed all the resolution I could get.

    I found I was getting better building edges and cleaner models after simplifying from a high detail model vs normal.

    But I was also getting really long processing times, in the 1-3 week range.

    It's best to make sure all your alignment is working well before you try running it on high detail, otherwise you can just end up wasting your time.
  • Götz Echtenacher
    Chris, do you mainly do aerial stuff?
    Seems like the scenario I described where you just don't have many options to take additional images.
    Because in the end it is a question of resolution. If you (can) make sure it is high enough, then the inaccuracies of normal processing can be kept below your intended threshold.

    With my projects - often highly irregular historic buildings - it is quite hard for me to predict alignment quality, or rather to find out where there might be something amiss. Of course, the inspection tool is incredibly helpful in that respect, but it does not show actual errors in alignment, which sometimes happen in some obscure corners. Do you have any idea how to pinpoint possible troublemakers easily?
  • Benjamin von Cramon
    My understanding is that RC uses adaptive subsampling during Simplify, as does ZBrush with Decimation Master, as well as with ZRemesher when retopologizing to transfer detail high to low. My work is focused on extensive interiors in rock (caves), but I wanted to compare the results of a workflow through ZBrush vs. simplifying in RC. The differences weren't night and day, and I question whether it's worth the trouble of round-tripping through ZB. I'm clear this isn't one-size-fits-all; I agree there's a key difference between shooting from some distance and gleaning all the high-frequency detail out of the imagery, versus working close to your subject, as I am, capturing 52 MP images from 2-6 meters.

    For my comparison test I captured a chair with highly detailed wood carvings, contrasted by broad smooth sections in the upholstery, making it easy to see what adaptive subsampling was actually doing to preserve detail in the carved parts while minimizing polycount across the domed cushions. The snip of the scene in UE4 doesn't convey what's needed, but flying around the chairs up close, from one to the other, I can say it's really hard to see much difference. Reconstruction in High produced 300 M tris; the ZB chair on the right is 700,000 tris (from 350,000 quads), the simplified (uncleaned) RC chair on the left is 500,000 tris. Still a heavy asset, but that wasn't the point of the test.

    As for how smart the adaptive subsampling worked, I've not tweaked ZB, but out of the gate I'd say it threw less detail into those cushions than RC. The ability to tweak would seem critical to optimizing a mesh as the world isn't one-size-fits-all in what it dishes up in 3D capture.

    Chairs_AB.JPG


    Reading the above posts and weighing against my own setup, it does seem that working with high-resolution images from close range then using High quality in Reconstruction is overkill.
  • Götz Echtenacher
    Hi Benjamin,

    nice work!
    Also thank you for the test!
    Could you upload an untextured screenshot, so that the polygons are visible?

    I also have a feeling that using Zbrush can be really good for extreme situations, but might rarely be worth the effort. Of course, the internal simplifier in RC (or others) won't do as good a job as a carefully prepared model in Zbrush, but I think good enough for many cases.

    And I say it again, from 300 mil to about 600,000 is a tremendous step. On average, that means that 500 polygons will be merged into one! Imagine how much of the small detail will be lost.
    The whole thing might be different if one tries to stay within a certain limit. From what I read about the subject, a factor of 10 is already pushing it in terms of ruining too much detail. And especially if you cannot reduce the polycount very much, the quality of the algorithms should play a larger role than with extreme examples, because every polygon counts. And if the algorithms leave too much detail on even surfaces, those polygons will be missing in the details.
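    The arithmetic is easy to sanity-check with a throwaway snippet (the numbers are just the examples from this thread):

    ```python
    def merge_factor(source_tris, target_tris):
        """Average number of source triangles collapsed into one output triangle."""
        return source_tris / target_tris

    # Benjamin's chair: 300 M tris simplified to ~600 k -> 500 tris merged into one
    print(merge_factor(300_000_000, 600_000))

    # Christian's track: 100 M down to 5 M -> only 20 into one
    print(merge_factor(100_000_000, 5_000_000))
    ```

    Which is why the quality of the decimation algorithm matters far more in the second case than in the first.
    
    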
  • Benjamin von Cramon
    Hello Götz,

    I lied, the RC mesh isn't 500,000 tris, it's 1 M, but that wouldn't explain the lack of much difference in the outcome. Of course, I've not pushed this down to a really light asset, like 50,000, which in a big set is where things really have to perform. I'm attaching a couple of screenshots of wireframes; here's the 700,000-face mesh from ZB:

    ZB_wire.JPG


    And here's the 1 M tris mesh from RC:

    RC_wire.JPG


    Clearly, way more can be done to reduce polys on the cushions, not to mention elsewhere in the carved parts.
  • Götz Echtenacher
    Hi Benjamin,

    thank you!
    Would be interesting to do another one with RC down to 700k as well, from the original of course.
    Because to be honest, right now I like the RC one better - the detail seems to be crisper.
    And ZB doesn't seem to have that many fewer polygons on the cushions either.
    I wonder if the difference is the structured vs chaotic distribution.
    If you look, for example, at the edge of the seat, the difference between flat and carved (tiny knobs) areas is really pronounced, whereas ZB has more or less the same density...

    Nevertheless, they are both really good - how many images if I may ask?

    And honestly, if you need to downscale them even further to 1/10, I really don't think we need to talk about normal or high anymore, but rather preview or normal... :lol:
    Because in the end the main difference between the three is the resolution: 1:1 in high, 1:2 in normal and 1:4 (I believe) in preview. So as long as there aren't any other corners being cut, that is it. And if, like in your case, you start with an incredible resolution, you can very well deal with a little bit less... :-)

    Could you add a preview shot to the selection?
  • Benjamin von Cramon
    Götz,

    167 images on the chair. I'm not sure what you mean with "a preview shot of the selection".

    There's something else here which I'm told by someone much farther down the road in UE4 needs to be considered. According to him, "One of the issues with the tiny UV islands isn’t an issue for the texture but with the lightmap and baked lighting. When setting the object to static and using static lighting, those UVs can create incorrect or missing lighting if used in the second UV channel."

    I've not seen for myself what issues this causes in "the second UV channel", much less why one needs a second UV channel, but I won't jump into that just now. I'd like to settle these issues first to simplify the conversation. I'm tasked with some work before I can run your proposed test, but can do.
  • Götz Echtenacher
    Sorry, I meant can you do a reconstruction on Preview - I would just be curious if there is much of a difference...
  • Benjamin von Cramon
    Hello Götz,

    Quite revealing running Reconstruction in Preview - check out the spread of polys, quite heavy on the back cushion compared to the seat cushion. Also, note the carved holes are largely filled in. Some aspects are quite acceptable, which tells me Preview could in fact be useful for very basic shapes.

    gothic_preview_01.JPG


    gothic_preview_02.JPG
  • Götz Echtenacher
    Hey Benjamin,

    thanks again!
    Very interesting indeed.
    I think the polycount is quite close to your simplified examples.
    It does however mean that there must be more differences between high, normal and preview than just the resolution.
    And if you look closely, there are unintentional spikes on flat surfaces, which influence the appearance quite a bit.

    In the end it is a matter of scale - do you want a closeup model with all intricate details or is it a prop in a larger setting.
    And if you squeeze that last one into 50.000 and compare it to another 50.000 based on a high detail reconstruction, I bet you have to use software to visualize the small differences...
  • Vlad Kuzmin
    Ok, people.

    Let's imagine 3 different datasets: the 1st from a Nikon D810, the 2nd from a Canon PowerShot G7 X and the 3rd from a Samsung Galaxy S7.
    All datasets have the cameras in the same positions, and the same image resolution, let's say 12 Mpx.

    How the modes in RC work:
    Draft - 3-4x image downscale, minimal mesh optimization.
    Normal - 1-2x image downscale, mesh optimization and refinement.
    High - no image downscale, maximum mesh optimization and refinement.

    So on default settings there is no question about Draft/Normal/High: Draft uses 3-4 times smaller images for the depth maps, Normal 2x smaller, High full resolution.

    All datasets except the Samsung one will have more detail, and as a result a bigger mesh count, in High compared to Normal. The Samsung one will probably be nearly the same :)

    If you change the default settings for Normal and disable the downscale, there will be a different result.

    Probably only the Nikon D810 dataset will have much more detail in High, due to sharp images with no AA filter, plus the additional mesh calculation steps compared to Normal mode.

    The G7 X will probably have the same amount of detail and probably slightly bigger polygons. Actual detail will be lost due to the lower amount of detail in the images (AA filter, ISO noise, etc. - a cheaper camera).

    The Samsung dataset in High mode will just have an increased number of polys without any real detail.

    So if you have a sharp, clean dataset with good overlap, good textures, low reprojection error after triangulation etc., and if you need the maximum possible detail, you can use High.
    If you use a middle class camera and often make errors in overlap, DOF, etc., Normal with downscale disabled is enough.
    If you use a smartphone camera, then you should not use High because it will not give you more detail.
  • Benjamin von Cramon
    Thanks, Vladlen, that all makes perfect sense. I'll start paying attention to the reprojection error after triangulation, to compare against various shooting conditions and better train my eye for how best to shoot.

    Götz, I've only run the chair in High and Preview; might be interesting to now see what difference there is in Normal. The issue with the holes in the wood carvings should show a difference, given my 52 MP Sony A7RII shot the chair from 1.5 m with plenty of light to sustain high DOF.

    Make real hay in the meanwhile, I'll follow up.

    Many thanks, both of you.
    Benjy
  • Vlad Kuzmin
    Hi Benjamin.

    If you can show the camera placement for this chair scan and some images (crops with only the chair are enough), I can probably tell you the reason for these problems.

    From what I can see of your raw mesh, there are probably 2-3 reasons:
    Wrong acquisition (some surfaces are at a 45-degree angle or more in the images), or not having good coverage and overlaps.
    Reflections or light spots on reflective surfaces.
    Or surfaces with weak textures shot from too far away.

    All of these can be reasons why some areas of the mesh have huge polygons or holes.
  • Benjamin von Cramon
    Hello Vladlen,

    On the road the last couple of days. The attached snip shows the convergence angle between cameras isn't near 45 degrees relative to the chair, which in most of the wide shots (upper three rows) fills the frame. For the two lower rows I moved closer and shot in landscape mode; the whole lower part shows in each frame, cropping the top. Then you see the much closer stills from the front and the back to capture all the detail of the carved wood left and right of the back panel. I checked critical focus throughout. No shiny spots; the light is cross-polarized for pure diffuse.

    Gothic_Cams.JPG


    As for what geometry RC is able to glean from the photography, I was kind of shocked to see an extremely subtle difference in how the threads of a given color in the woven design sit lower than neighboring colored threads or thread types:

    Gothic_weave.JPG


    One mistake: I should have moved in to the close-ups more gradually and/or worked the close-ups around from the front to the back and back to the front. The model suffers from one real flaw - the lathed finials upper left and right have the front half slightly offset from the rear half. The wood contains plenty of grain in the texture for the engine to grab onto, but there's nothing relating this detail around this symmetric shape from front to back on both sides to model it properly.


    Best,
    Benjy
  • Vlad Kuzmin
    Hello Benjamin.

    I don't see any cameras from the top. Are they just not shown on the screen, or do you not have them?

    Anyway, I'll show some schematic images of how the cameras should be placed.

    First of all, we must remember that photogrammetry requires not the silhouette of an object but its surfaces and details (textures).

    Now let's imagine a simple chair:
    It has surfaces that point to the top, the sides and the bottom.
    Capture1.JPG

    Capture2.JPG

    Capture3.JPG

    So we Must Have at least 1 image directed toward the surface.
    Capture4.JPG

    And no fewer than 2 images at 10-15 degrees to the first camera.
    Capture6.JPG

    The central camera will give you a perfect texture; the other two will give clean depth maps (and later a clean dense cloud) for that central camera, which is required for calculating the 3D topology.

    And this must hold for Every surface you want to capture! Every surface, in ideal conditions, must have 3 shots.

    But if we have surfaces that meet at a high angle (90 degrees, like in the example), we need additional images for "stitching" the dense clouds in the angles between the main camera triplets.
    Like this.
    Capture5.JPG

    So the "final" scheme will look like this:
    Capture7.JPG


    So we already have 15 cameras for only 3 surfaces!

    Ok, in the real world, with a good camera like a Nikon D810 and a good lens, we can "cheat" and use only 5 cameras.
    But for this example, with 3 surfaces all at 90 degrees, even the result from a D810 will not be perfect.
    So I can't recommend shooting fewer than 11 images

    Capture8.JPG


    or there will not be enough data for clean depth maps -> dense clouds -> mesh and textures, and as a result the final topology will have less detail or will have problems (especially if the object has weak surfaces).

    And now, when we see a nice object that we want to scan, we can plan where and how many images we need for clean topology and textures.

    We should also remember the limits of real cameras and lenses. They have DOF, aberrations and non-linear distortions (the last two problems are common in the areas near the corners and edges of a photo). So the real, good data in an image is about 75-80% of it (sometimes less), in the center. And all this can require additional images for a good 3D reconstruction.
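    The rule of thumb above can be turned into a tiny planning helper. This is my own hypothetical sketch, assuming 3 shots per surface plus 3 bridging shots per sharp (~90 degree) junction, which reproduces the 15-camera figure for the 3-surface example:

    ```python
    def min_images(surfaces, sharp_junctions, shots_per_surface=3, bridge_shots=3):
        """Rough lower bound on images needed for clean depth maps.

        Each surface gets one frontal shot plus two at 10-15 degrees;
        each ~90 degree junction gets extra shots to stitch the dense clouds.
        """
        return surfaces * shots_per_surface + sharp_junctions * bridge_shots

    # The chair example above: 3 surfaces joined at two 90-degree edges
    print(min_images(3, 2))  # 15
    ```

    In practice you would add more on top for DOF, lens distortion and weakly textured areas, as noted above.
    
    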
  • Benjamin von Cramon
    Thank you Vladlen,

    Nice job breaking that down to ideals and illuminating the issues with lenses. I need to modify my thinking when considering how I break an object down from the macro structure to the micro scale of finer details, and do a better job providing transitional shots to move from one to the other. When working on a piece of furniture like this, even a slightly converged view of the seat provides plenty of texture data - no need to shoot directly from the top, especially given my imagery is 52 MP. A chair is one thing, but I try to maintain a 2 meter distance from the macro structure of cave walls when capturing environments. So much is going on in a crazy environment; the model is coming along nicely, but I definitely need to take more photos, spending more time than should be necessary setting tie-ins.

    Big thanks for your guidance!
