
simplify emphasising surface noise...

Answered


22 comments

  • Götz Echtenacher
    Did you do a separate simplification from the highest to the lowest?
    Could it be a display issue?
    How about smoothing it in the highest resolution before simplification?
  • Jennifer Cross
    Götz Echtenacher wrote:
    Did you do a separate simplification from the highest to the lowest?
    Could it be a display issue?
    How about smoothing it in the highest resolution before simplification?


    Good morning!
    Each simplification was done directly from the highest def model.
    So in this case it is just about testing/abusing the simplify process/code.

    I have other battles to fight with the smoothing parameters for my work - but let's talk just simplify here.
    Could it be that some aspect of its feature preservation is emphasising camera misalignment? These surfaces are very feature rich (underground mine walls) but not very colourful. I would love simplify to find and preserve seams... but we're losing what seem to be smooth-ish surfaces in the high poly version (amplifying surface deformity would be bad when our next step is texture mapping).
    Or I could just be doing it wrong :)
  • Götz Echtenacher
    Good morning to you too!
    Ah, Australia, that explains the good morning in the middle of the night... :D
    I hate such cases as well.
    Do you have any sample images?
    And maybe a screenshot of the tie point cloud with cameras activated?

    Have you tried decimation master in ZBrush?
    Also, Meshlab offers a huge array of possibilities, as well as cloudcompare.
    In the end, it is not so much different from filtering laser scans from scanners, most of them also have a certain noise.
    Anyone experience with that?
  • Jennifer Cross
    Yes - far side of the planet in Perth WA

    Usually when I simplify with other programs the surfaces become plainer (they remove features as we remove points).
    Good ones preserve creases and tend toward flat surfaces (some of the Meshlab filters act this way).

    Hence my question about the RC simplify... it's not acting the way I would expect, so either I'm doing it wrong (very possible) or there is something funky going on that makes the surfaces bumpy(-ier) during a basic reduction.
    That's why I posted the 3 clips of the same section of mesh. I need models at different performance levels (WebGL, VR, CAE) and the raw mesh looks pretty nice. Is the intended workflow to use simplify to reduce the mesh count, making the surfaces larger, and then cover the change with texture?
    Even better if I could get the software to compare the 35m mesh with the 150k mesh and produce a bump map! I'll have to put that in the feature request section once we decide how to make the 150k surfaces reliable. Maybe I'll have to smooth the low poly count meshes? If I do that, am I just smoothing over a bug that makes noisy meshes?
    But people love the models... work work...
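    The bake-a-bump-map idea above can be illustrated in miniature: if you treat a patch of each mesh version as a heightfield, the bump value is simply the detail the simplified surface lost. A toy numpy sketch (`bake_bump_map` is a hypothetical helper, not an RC or Meshlab feature; real bakers ray-cast from the low-poly surface instead of comparing heightfields):

    ```python
    import numpy as np

    def bake_bump_map(high_res: np.ndarray, low_res: np.ndarray) -> np.ndarray:
        """Toy bump-map bake: the bump value is the height the simplified
        surface lost, i.e. high-res height minus the (upsampled) low-res
        height. Purely illustrative, not how production bakers work."""
        # Nearest-neighbour upsample of the low-res grid to the high-res size.
        fy = high_res.shape[0] // low_res.shape[0]
        fx = high_res.shape[1] // low_res.shape[1]
        upsampled = np.repeat(np.repeat(low_res, fy, axis=0), fx, axis=1)
        return high_res - upsampled

    # A 4x4 "detailed" patch and its 2x2 "simplified" version.
    high = np.array([[0., 1., 0., 1.],
                     [1., 0., 1., 0.],
                     [0., 1., 0., 1.],
                     [1., 0., 1., 0.]])
    low = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
    bump = bake_bump_map(high, low)   # positive = bump, negative = pit
    ```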
  • Wishgranter
    Hi Jennifer Cross

    You are looking at a mesh with a not-ideal normals display (the simplified model). Try the RENDERING tool and set:
    TYPE: SHADED
    SHADING STYLE: FLAT or FLAT WIREFRAME

    and compare the results.

    At least from my view, the noise on the model is because of missing features (flat, featureless surfaces) or misalignment.
  • Götz Echtenacher
    Jennifer, somebody made that bump-map request recently, so you only need to add your vote!
    What kind of filters do you use in meshlab?
    I got kind of stuck there...

    And yes, the alignment is not ideal, but there should still be an explanation for the noise.
    I've been wondering too though if it's just a display thing.
    Sometimes the old point based viewer is still useful. Also the edgy one...
  • Jonathan_Tanant
    I think this is only the color. There is this weird reddish/blueish color when you simplify; I think this is related to a default vertex color or something...

    In MeshLab, you can play with :
    Filters->Smoothing, Fairing and Deformation->Laplacian Smooth (Surface Preserve) and tweak the parameters.

    You can also play with the manual Mesh Smooth tool :
    Click on the paint brush icon, then on the water drop, tweak the brush parameters and paint on your model to smooth it.

    Then when you have a smooth (but preserved) model, you can simplify it with Filters->Remeshing, Simplification and Reconstruction->Simplification:Quadric Edge Collapse Decimation and tweak the parameters.

    Once you are done, export your mesh and reimport in RC.
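    The core of the Laplacian Smooth filter mentioned above is simple: each vertex moves part-way toward the centroid of its neighbours. A minimal pure-Python sketch of that idea (the "Surface Preserve" variant additionally limits how far a vertex may drift from the original surface, which is not shown here):

    ```python
    def laplacian_smooth(vertices, neighbours, lam=0.5, iterations=1):
        """Basic Laplacian smoothing: each listed vertex moves a fraction
        `lam` of the way toward the centroid of its neighbours.
        `vertices` is a list of (x, y, z) tuples; `neighbours` maps a
        vertex index to the indices of its adjacent vertices."""
        verts = [list(v) for v in vertices]
        for _ in range(iterations):
            new = [list(v) for v in verts]
            for i, nbrs in neighbours.items():
                if not nbrs:
                    continue
                for axis in range(3):
                    centroid = sum(verts[j][axis] for j in nbrs) / len(nbrs)
                    new[i][axis] = verts[i][axis] + lam * (centroid - verts[i][axis])
            verts = new
        return [tuple(v) for v in verts]

    # A noisy spike: four flat corner vertices and one raised centre vertex.
    vertices = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0), (1, 1, 1)]
    neighbours = {4: [0, 1, 2, 3]}      # only smooth the centre spike
    smoothed = laplacian_smooth(vertices, neighbours, lam=0.5)
    # the centre vertex drops halfway toward the flat plane: (1.0, 1.0, 0.5)
    ```

    This is why smoothing before decimation helps: the quadric edge collapse step then works from a surface whose spikes no longer dominate the error metric.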
  • Götz Echtenacher
    Hey Jonathan,

    thanks a million for that rundown!
    I was not aware that Meshlab has a smoothing brush!
    That would mean it is a real ZBrush replacement...
  • Jennifer Cross
    Wishgranter wrote:
    Hi Jennifer Cross

    At least form my view the noise on model is because missing features ( flat, featureless surfaces ) or misalignment.


    Ok, I dug in and looked at the high res model super close, and I'll go with camera alignment, because the surface is really spikey even in areas that should be smooth (this is underground blast faces, so the surface is rough, complex and feature rich by itself).
    In my high res component report it says:
    clip.png


    Which, from what I could read in the Help, should be ok... All 28 images have 100000 points, and the camera poses are all around the 48000 tie point mark (lowest was 31k, highest was 51.5k). Alignment downscale was 1, max feature error = 2.
    So, suggestions on what I should do to improve the camera positions?
    (I'm sorry but I can't share the photos or much of the textured image... mining process data is confidential)

    In the meantime I'll meshlab the surface smooth but obviously this is something I should fix in my RC workflow.
    Thanks too for all the suggestions on using other packages and the settings there - definitely helpful as I get deeper into the projects.
    Thank you
    Jennifer
  • Götz Echtenacher
    I guess 1 image with a very small area could hardly be identified by anyone who has no inside knowledge?

    Some areas seem all right.
    I could imagine either not enough distance from 1 image to the next (not enough base) or too acute angles.
    Did you shoot head on or along a shaft?

    If the circumstances are not ideal, I think this kind of misalignment is hard to avoid.
    A workaround could be that you take your images so that your image resolution is significantly higher than your target.
    Then the noise would fall below the threshold...
  • Wishgranter
    Hi Jennifer Cross

    Please next time also include the ALIGNMENT settings used (a larger portion of the settings panel needs to be included).

    Try using these settings for proper alignment:

    ALIGNMENT_80_40k_brownTT.png


    Can you share the source images (send them packed to my email please) and the RC project data for inspection? (the settings you used)
  • Wishgranter
    Hi Jennifer Cross

    The data are not properly captured for photogrammetry: all cameras are near each other, and some are in the same positions, so the surface noise comes from these camera positions. To capture the cave properly you need to capture it properly, or you get issues like here.
    Screenshot 2017-08-22 08.41.44.png
  • Jennifer Cross
    Thanks for having a look at the project wishgranter...
    So what settings can we use to improve on the surface resolution/camera registration?
    Is it because RC is finding too many fine details? Is downsampling the answer, or is there some way to tell it to only use larger, more distinct feature points? What is the problem with multiple near-identical shots in the stack? Is there a degree of uncertainty in the camera registration, so duplicates end up conflicting rather than contributing? Perhaps some of the settings would allow near-duplicate images to correctly align with each other? Picking stable classes of features perhaps? Any chance a camera setting is doing this? I read somewhere that OIS was a bad thing for photogrammetry; any other settings I can check/include in the procedures?

    I'm coaching the geologists on the photos required, but given the circumstances, I need to build a workflow that deals with whatever they can get. Often physical access or safety procedures will limit camera positions, and lighting is always "interesting". Sirovision's dedicated camera helps quite a lot, but they want me to come up with an alternative for some of the situations where the Sirovision software/hardware isn't available.

    Thank you
    Jennifer
  • Götz Echtenacher
    Hi Jennifer,

    my reply got apparently lost in the ether somewhere...

    Short version:

    - light and texture should be fine; light might only affect the texture result, but in my (similar) cases it's usually ok
    - as you (and Wishgranter) said, the bases of the images in relation to the surface are way too small, some even almost on the same spot - deactivating them until only one in each spot is left was the first thing I did
    - in the future, the images should rather be taken from the opposite face - as perpendicular to the target surface as possible
    - the water and the reflecting metal grid won't do the alignment much good either; you could try and mask those (plain colour is sufficient)

    I tried around a bit, although the results did not improve by much:
    - deactivate the duplicate images
    - put all the images in the same calibration and lens group (0, 1, 2 etc. - doesn't matter which; only -1 means each image is distorted individually)
    - use K + brown 4 with tangential 2 (works best for me with smallest errors)
    - repeat alignment a few times without changing anything
    - when the errors go down significantly, start lowering the max repro error in 0.5 increments until reaching 0.5 or until the component breaks up
    - I got 0.6/0.23/0.25 max/median/mean errors by doing that

    However, the noise is similar, only there are fewer gaps in the surface...
  • Götz Echtenacher
    Last thought:

    I think for the purpose the result is absolutely ok.
    It's not that you need mm accuracy, right?
    The general measurements should be within acceptable tolerances.
    How do you do the geo-referencing?

    And I forgot:
    At the end, you can ungroup the images again if auto-focus or zoom lenses were used (if there might be slight differences from one image to the next) - that might improve accuracy again by quite a bit...
  • Jennifer Cross
    Götz Echtenacher wrote:
    Last thought:

    I think for the purpose the result is absolutely ok.
    It's not that you need mm accuracy, right?
    The general measurements should be within acceptable tolerances.
    How do you do the geo-referencing?


    We are getting pretty good results with the current system, yes... but what led me to chase this up is that the spike/hole pattern is killing my simplify accuracy, RC smooth just makes them weird, and we still get massive distortion when simplifying. The odd distortions of the surface can hide in the texture, but adding specularity means we don't get nice facets to work with.
    Fortunately I haven't been asked to calibrate these yet, but our procedure is that the geos mark the faces and the survey team picks up the marks. I've planned to add these back in as GCPs when we start combining this with the other model systems.
    For the moment I just need to get good models at low poly counts for WebGL and a reasonable poly count for VR.
    I'm thinking for the moment I'll figure out how bad the surface distortion is, then make a Meshlab filter to smooth out bad registration distortion while preserving the other features (noise reduction for the mesh from the RC misalignment).
    Hopefully some of your settings, or maybe something more Wishgranter can offer to help the alignment system parameters cope with the photos.
    Thanks again for all the suggestions!
    Jennifer
  • Götz Echtenacher
    I fear that this is very close to as good as it is going to get with the image set.
    The only other thing you can try is adding Control Points like crazy.
    But that can easily get out of hand and needs a lot of tweaking to really improve things.

    One last thing: If what you mean by distortions is the weird greyscale stuff on the model within RC, try to set it to Solid in Scene Render - I find that much more suitable for judging surfaces than the Sweet setting.
  • Jennifer Cross
    Götz Echtenacher wrote:
    I fear that this is very close to as good as it is going to get with the image set.
    ..
    One last thing: If what you mean by distortions is the weird greyscale stuff on the model within RC, try to set it to Solid in Scene Render - I find that much more suitable for judging surfaces than the Sweet setting.

    I suspect control points aren't the answer - the surfaces have massive numbers of matched details (tie points) between images, so the misregistration must be a camera/algorithm thing. The distortions I'm now talking about are tiny spikes and pits in the surface. If you get the RC viewpoint close to almost any surface and look along it, you can see patterns of spikes and dips, even in areas that should be very flat - for example the anchor bolt plates. These look ok when you texture over them, but make the surfaces lumpy when you smooth/simplify.
    I'll have a try with your suggestions tomorrow and attach some pictures of the surfaces from oblique angles, depending where I get to on it. I hope - tomorrow is 360 video and reconstruction demos for the safety management team... busy busy!
    Thanks again
    Jennifer
  • Götz Echtenacher
    NP, I learn from this too! :-)

    The spikes you are talking about are noise. That is inevitable, because the algorithm has very bad geometry to go on.
    Imagine what it means: you have a base of, say, 10 cm (the distance between 2 images) and your surface is about 5 m away (height). Imagine how minuscule the difference between the images must be, and you expect mm accuracy in the Z direction! :-) There are bound to be errors, and those errors manifest as noise. It's the difference between the nearest and the furthest of the tiny miscalculations - the real surface is somewhere in between. If you look at it from along the surface and turn it, you'll see that it's some kind of fuzzy cloud, probably denser in the middle. All you can do is use some filter that has been made to deal with that sort of noise, but then of course the remaining cloud will be much thinner and therefore less detailed.
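    The baseline argument above can be put in rough numbers with the standard stereo depth-error approximation, sigma_Z ≈ Z² / (B·f) · sigma_d. The focal length and matching error below are illustrative assumptions, not values from this thread:

    ```python
    def depth_noise(distance_m, baseline_m, focal_px, match_err_px):
        """Approximate stereo depth uncertainty: sigma_Z = Z^2 / (B * f) * sigma_d.
        Z = distance to the surface, B = baseline between camera positions,
        f = focal length in pixels, sigma_d = matching error in pixels."""
        return (distance_m ** 2) / (baseline_m * focal_px) * match_err_px

    # The scenario above: 10 cm baseline, surface 5 m away. The 3000 px
    # focal length and 0.5 px matching error are assumed for illustration.
    noise = depth_noise(distance_m=5.0, baseline_m=0.10, focal_px=3000.0,
                        match_err_px=0.5)
    print(f"{noise * 1000:.0f} mm")   # prints "42 mm" - nowhere near mm accuracy
    ```

    Doubling the baseline or halving the distance each cut the noise substantially, which is exactly why the near-duplicate camera positions hurt so much.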

    Now I shall leave you in peace! :lol:
  • BenjvC
    Hello Jennifer,

    max feature error=2.
    So, suggestions on what I should do to improve the camera positions?


    Vladlen recently shared a workflow that directly answers your question:
    - In the Workflow settings, use Group by exif data (assuming one lens and camera body combo) as a starting point for the initial Alignment.
    - The default 2.0 setting for max reprojection error is fine; it can be turned up to 3 or 4 in cases with problematic photography, when you can't go back and reshoot.
    - After Alignment, you'll likely see the max error in your largest Component at 1.9, just below the threshold you set.
    - Add CPs to orphaned images, or to images where diagnostics point you at problem areas, e.g. thin regions in the sparse point cloud or, more serious, breaks in surfaces. You can up the Weight of these CPs if you're confident of features appearing the same from different perspectives and of placing them accurately.
    - Run Alignment again until you're happy with a master Component, then turn down the max reprojection error by 0.5 and align (it goes quickly). Check the max reprojection error; you should see it drop, e.g. 1.9 to 1.4.
    - Rinse and repeat until you get down to the 0.5 setting.
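    The iterative tightening described above can be sketched as a simple loop. RC is driven through its GUI, so `run_alignment` here is a hypothetical stand-in for pressing Align at a given max reprojection error threshold:

    ```python
    def tighten_alignment(run_alignment, start_error=2.0, floor=0.5, step=0.5):
        """Sketch of the iterative workflow: align, then lower the max
        reprojection error threshold in `step` decrements until `floor`.
        `run_alignment(threshold)` stands in for RC's Align button and
        returns the resulting max reprojection error of the main component.
        Returns the (threshold, observed error) history of all runs."""
        threshold = start_error
        history = []
        while True:
            observed = run_alignment(threshold)
            history.append((threshold, observed))
            if threshold <= floor:
                break
            threshold = max(floor, threshold - step)
        return history

    # Fake alignment whose error settles just under each threshold,
    # mimicking the 1.9-just-below-2.0 behaviour described above.
    fake_align = lambda t: round(t * 0.95, 2)
    runs = tighten_alignment(fake_align)   # thresholds 2.0, 1.5, 1.0, 0.5
    ```

    In real use you would also inspect the component and add CPs between iterations; the loop only captures the threshold schedule.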

    Also keep an eye on the median reprojection error, which tells you how things are going overall; the max may only relate to a few problem children. You can go after those weak spots (if you're good at diagnosing them) with added CPs and watch the numbers come down.

    At the end, switch the Group by exif data setting back (I forget what it's called) and run align one last time; you should see a final small drop in max and median error. This setting is important for a couple of reasons. By assuming a set distortion signature for your imagery, RC calculates this just once with a handful of images and runs with it for the others. It's faster, but more importantly, it's more flexible in dealing with overly converged subject matter - sections of a photo where the math involved in proper triangulation is strained and thus returns higher reprojection errors. This isn't simply a matter of not following best practices while shooting, as some subject matter forces you to introduce some extremely converged surfaces while getting ample coverage of others. You don't want RC dealing with those problems at the beginning.

    One or more of these images containing highly converged subject matter, if not set to Group by exif data, may align but may introduce such distortions in the model (e.g. curved surfaces that should be planar, often at the edges of the model) that neighbouring imagery can't be tied in. The max reprojection error then forces RC to break the set into smaller Components, and it's a wasted effort trying to bring these into a single Component with CPs once you've locked in these reprojection errors.

    This iterative workflow makes it easiest for RC to get a handle on everything, with the opportunity to manually intervene between each alignment and catch problems while they're young. Then, once you've optimized the model, at the very end you let RC consider each image individually to eke out that last bit. Even if you use the same lens and body for all your work, changes in focus and aperture will (slightly) affect distortion. Also, no two lenses are the same; manufacturing tolerances don't consider (and can't afford to consider) how one image from one lens relates to imagery from another lens, not at this granular level. I learned this testing "matched" Zeiss primes during the 3D days, and photogrammetry is even less forgiving than what your brain does accommodating imagery from both eyes.

    In your case, with fine details in the wall, ratcheting down on how accurately these are modeled should in turn benefit how they're affected by the Smoothing and Simplify tools - worth an A/B comparison test. Do share.

    Best,
    Benjy
  • Jennifer Cross
    Wow Benjy - Thank you for so much detail and a lot of really good suggestions for the workflow.
    I've taken your suggestions and ran with them for a bit...
    I hadn't realised re-running alignment would incrementally improve the results (but I should have, thinking about the prior pose data). Neat feature - well worth doing, but it didn't help my data that much due to the poor quality of the baseline.
    Upping the Component max error was an interesting exercise - this seems to filter out all the features the software doesn't get a good "lock" on. So I boosted my feature count and used the error filter to help select the best (most distinctive, I hope) features.
    Lowered the threshold and aligned a few times, with better alignment each time.
    Lowered the threshold a bit more and suddenly my alignment results got much better - mostly because the model lost a whole area of alignment! (It couldn't align any points in a whole section due to the short baseline.)
    Fair enough

    So I went back to a good state with the best report I could get while retaining the model, thinking maybe I could isolate a couple of especially poor photos. Turns out the time in the workflow when you can pick points/find cameras is really limited. Miss your opportunity and you have to align the cameras again to get the control working. (Why can't we pick a point or vertex anytime and backtrack the contributing photos? I just wanted to isolate some of the high/low points, then disable their camera(s).)

    Good tools in RC for diagnosing the results - just have to know when they are available and where to use them!

    Thanks again for the workflow suggestions everyone... So many options to find and tweak.
    Jennifer
  • Götz Echtenacher
    Jennifer Cross wrote:
    why can't we pick a point or vertex anytime and backtrack the contributing photos?


    You mean select points in the sparse cloud with Points Lasso or Rect(angle) so they are highlighted in orange and then press Find Images?
    That's already possible... :D ...just not with the mesh...
