missing parts of object with High Detail reconstruction.

Comments

23 comments

  • ivan

    Could you upload or link to the full-res image being used?

    I fear, however, that the issue is due to the type of object you are trying to capture: smooth, featureless surfaces.

  • john_ondaka

    Hi Ivan,

    Here's a link to that image: https://www.dropbox.com/s/hivsx8qhji66szq/DJI_0297.JPG?dl=0

     

  • john_ondaka

    For completeness' sake, here is what the model looks like after texturing and coloring:

  • ivan

    You certainly captured a good number of photos; some shot vertically downward from the drone, perhaps in a grid pattern, would also help in the future.

    Looking at the source image, I'd suggest the issue you are having is a combination of things.

    From the textured render you show, the parts that fail the most are the flat, textureless surfaces. This is a limitation of photogrammetry, not of any particular application. (Giving the objects some texture, with stickers or similar, can help at times... or a laser scanner.)

    You did the correct thing by getting the whole scene in frame in a 360° pass; this is great for registration.
    However, the subsequent orbits are hardly any closer (a limitation of the wide-angle lens, I expect).
    The full-res image you provided shows that only a quarter to a half of the frame actually contains the data needed, so much of the potential image is discarded.

    As an example, one area I can see it struggled with is where two flanges connect with a crossover.

    This is what the software actually has to work with; we can see the detail and quality are not there. Noise and JPEG artifacts will also cause problems. You can now understand why it struggles.

    From a distance, the sample image you uploaded looks fine, and your colour render doesn't look too bad from a distance either.


    It is likely that taking close-up images of the various parts, in addition to the overall scene, would have helped.

    This could be done by getting the drone closer and by taking regular terrestrial images as well.

     

    Things can be tweaked in the software to help account for this, though I'm not the best person to ask. At a guess: try increasing Alignment Max features to 80k, setting the Preselector to 20k, and setting Detector sensitivity to Ultra. (Numbers picked out of the air.)

    Also, you may get better results with Normal detail than with High.

     

    To conclude: the number of images you have means good, accurate registration, but the lack of effective resolution in those images means poor detail.


    Keep us updated.

  • john_ondaka

    Thanks for the response!

    I'm going to try redoing this with the settings you mentioned (question: which did you mean to change to 80k, Alignment Max features per image or per mpx?).

    I have some images that were taken directly above the site at various altitudes; however, the lowest is at 100'. I don't have a lot of things I can use as control points to connect the components, but I'll give it a shot.

    We have also taken scans with lidar (a BLK360) but have had issues getting those to work well. The only way to get the content off the device is to use their partner company's software and then export from there. I believe we exported to E57 files, but they don't look great in RC. If anyone knows of a good workflow to get BLK360 content into a format that imports well into RC, I'd love to hear it.

    I'll post the results I get when I run the above tweaks as soon as I have them. 

  • ivan

    On the alignment: that should be double the default, for Max features. Sorry, I should have been clearer.

     

  • john_ondaka

    Tried with Alignment Max features = 80k, Preselector = 20k, and Detector sensitivity = Ultra. With Normal reconstruction, there was no appreciable difference that I could see.

    I'm going to rerun at High detail and see if that produces better results.

  • Götz Echtenacher

    Hey John,

    I mostly agree with Ivan's detailed assessment. However, I think the detail in the pipes shown in the close-up should be enough to get you something at least, if not perfection. Missing parts of the pipes point to an alignment problem in that area.

    Have you tried EXIF grouping yet? As a general rule, if you are trying to improve alignments, you need to delete the flawed components first, because otherwise RC will use them as a starting point.

    Using control points might also help a great deal; I managed to get some cables on a wall modelled that way. They don't have to be on the pipe directly, just in the vicinity, to help the alignment along.

    I am wondering about the orange lines between some of the cameras in one of your screenshots. Is it possible that this is due to GPS coordinates? Those have been known to cause trouble sometimes, so you might want to consider switching GPS off or deleting the data from the EXIF (only in copies, naturally).
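    For the EXIF route, the usual tools are exiftool (e.g. `exiftool -gps:all= copy.jpg`) or a library such as piexif. As a dependency-free illustration of the idea only — the tag number is the standard EXIF GPSInfo pointer, but the sample mapping is made up:

    ```python
    GPS_IFD_POINTER = 34853  # standard EXIF tag "GPSInfo", points to the GPS IFD

    def strip_gps(exif):
        """Return a copy of a parsed EXIF mapping with GPS entries removed.

        Sketch only: on real files you would use exiftool or piexif, and
        always work on copies, as noted above.
        """
        return {tag: value for tag, value in exif.items()
                if tag not in ("GPS", GPS_IFD_POINTER)}

    # Hypothetical parsed EXIF: keep camera info, drop location.
    sample = {"Make": "DJI", "GPS": {"GPSLatitude": (34, 2, 0)}, GPS_IFD_POINTER: 1234}
    cleaned = strip_gps(sample)  # {'Make': 'DJI'}
    ```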

  • john_ondaka

    I hadn't thought of trying control points for the missing sections; I had only used them to connect different components. It sounds like you're suggesting: add the photos > align > add control points across a few pictures for the areas that are missing > align again. Does that sound right?

    I'll also do a test with the exif data deleted.

  • Götz Echtenacher

    Hi John,

    I meant only the GPS part...  :-)

    And yes, that's the workflow I meant. If it doesn't help as much as you hoped, you can delete all components and align from scratch. As I pointed out, that often works better, since there is nothing old (and flawed) for RC to start from.

    A good indicator is whether you get high errors when placing the CPs, since that confirms an alignment problem.

  • john_ondaka

    Hi Götz,

    Yes, I only removed the GPS data :). I decided to start from scratch and added a bunch of control points, which helped after rerunning the reconstruction overnight. Unfortunately, that left the white box with half of its front and top missing, so I added more control points for all the surfaces that were not looking great and have been rerunning since about 10 am PST. I should have something to show in about 2 hours (2:15).

    Relatedly, I was looking at other parts of the UI and noticed Image > Show Matches. When I turn it on, a lot (if not most) of the match points between two given photos seem to be on the gravel ground around the unit.

    I'm wondering whether it would be better or worse to select a reconstruction region that doesn't include the ground. Any thoughts?

  • Richard Alan Vincent

    Hi John,

    I am just getting started with this software for processing captured data.

    The colored model looks great!

    I would be interested in knowing the system you used, if you can share that. Thanks!

  • john_ondaka

    Here's what I ended up with. 

     

    You can see that the sides and top of the white box, and the cylindrical tank, aren't coming through.

    I still need to try to get the unit from the ortho photos we took, but the closest is at 100', so I'm not hopeful. Also, if I can figure out how to get the laser scans off my BLK360 and into RC, I may have a chance. Otherwise, it looks like I may need to go reshoot.

  • john_ondaka

    Still curious: if I made the reconstruction region exclude the ground, would RC find more matches on the object?

  • Götz Echtenacher

    Hi John,

    yeah, unfortunately it is often like that: improving one part makes other parts worse. It can become a vicious circle leading to dozens or even hundreds of CPs. Lots of time and effort, but sometimes it's the only way if you can't get more images. At least you got the pipe on top now; I'd be surprised if that part got much better with the images you currently have.

    I am pretty sure that leaving out the gravel will not make things better, since that only affects the reconstruction stage; the key is the alignment.

    You could play with the following settings:

    Preselector: the max number of points that will be used for matching/creating tie points; it should be 1/4 to 1/2 of Max features [https://support.capturingreality.com/hc/en-us/community/posts/115000782031-Points-count-and-Total-prjections]

    Max features per mpx/image: e.g. 20k/80k; more features will be detected (but only if enough of the defined quality are present)

    If not, you can set Detector sensitivity to High, which will allow more, but also lower-quality, features.
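    As a quick sanity check on these numbers, the 1/4-to-1/2 rule of thumb above can be wrapped in a trivial helper (a heuristic from this thread, not an official RC formula):

    ```python
    def suggest_preselector(max_features):
        """Suggested Preselector range: 1/4 to 1/2 of Max features."""
        return max_features // 4, max_features // 2

    low, high = suggest_preselector(80_000)  # (20000, 40000)
    # The 80k/20k combination suggested earlier in the thread sits at the
    # bottom of this range.
    ```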

     

  • Götz Echtenacher

    Oh, and reshooting will almost always be quicker and more efficient...

  • john_ondaka

    Ok. I'm trying to get access to the site to reshoot (not a small task, unfortunately).

    When I reshoot, I need to make sure we capture closer to the object with the drone and also get images from the top (ideally a dome around each object, starting at a radius of about 5' from the object) at multiple elevations.

    If I can, I need to put some features on the large areas of uniform color (the white box, the grey panel).

    Question about this: how much do I need to cover the white box with chalk/dirt? Is a single diagonal line on each side enough?
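    The dome capture described above can be sketched as a simple waypoint plan. A minimal illustration, assuming evenly spaced elevation rings around an object at the origin; the radius is in whatever unit you fly in (e.g. the ~5' standoff mentioned), and the ring/shot counts are made-up parameters, not a capture recommendation:

    ```python
    import math

    def dome_waypoints(radius, n_rings=3, photos_per_ring=12,
                       min_elev_deg=15, max_elev_deg=75):
        """Camera positions (x, y, z) on a dome around an object at the origin.

        Every camera would point back at the centre. All parameters are
        illustrative assumptions.
        """
        points = []
        for i in range(n_rings):
            step = (max_elev_deg - min_elev_deg) / max(n_rings - 1, 1)
            elev = math.radians(min_elev_deg + i * step)
            for j in range(photos_per_ring):
                az = 2 * math.pi * j / photos_per_ring
                points.append((radius * math.cos(elev) * math.cos(az),
                               radius * math.cos(elev) * math.sin(az),
                               radius * math.sin(elev)))
        return points

    shots = dome_waypoints(radius=5.0)  # 3 rings x 12 shots = 36 positions
    ```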

  • Götz Echtenacher

    Hi John,

    sorry for the delay - we had bank holidays here...

    The distance to the object depends on several factors: the target resolution (mesh/texture) and the size of its features. You can't expect the geometry to be perfect at pixel level unless you're an absolute pro, so make sure you capture quite a bit closer than you will need; the errors will then disappear through simplification. If, for example, you want all the screws on a pipe connector to be modeled properly, you need to shoot them from close up.
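    This "how close is close enough" question can be put into numbers with the usual ground-sample-distance formula. A sketch assuming a pinhole camera model; the camera values below are assumptions for a typical 1-inch drone sensor, not John's actual rig:

    ```python
    def ground_sample_distance(distance_m, focal_mm, sensor_width_mm, image_width_px):
        """Metres covered by one pixel on the subject (pinhole camera model)."""
        return distance_m * sensor_width_mm / (focal_mm * image_width_px)

    def max_distance_for_feature(feature_size_m, focal_mm, sensor_width_mm,
                                 image_width_px, px_per_feature=5):
        """Farthest distance that still puts ~px_per_feature pixels across a
        feature (e.g. a screw head) -- a common rule of thumb, nothing RC-specific."""
        target_gsd = feature_size_m / px_per_feature
        return target_gsd * focal_mm * image_width_px / sensor_width_mm

    # Assumed example: 8.8 mm focal length, 13.2 mm sensor width, 5472 px wide.
    gsd = ground_sample_distance(10, 8.8, 13.2, 5472)    # ~2.7 mm/px at 10 m
    d = max_distance_for_feature(0.01, 8.8, 13.2, 5472)  # ~7.3 m for a 10 mm screw
    ```

    So, under these assumed numbers, features around a centimetre need the camera within roughly 7 m; at the 100' orbit distance they simply aren't resolved.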

    I am pretty certain the pipes have enough features all by themselves. If they are not brand new, the fact that they are outdoors should be sufficient: they probably have spots and scratches all over from rain, animals, etc. So you just need to get close enough that these are distinguishable in your images. If that is too much effort, you will need to cover them (as you said) to achieve the same thing. I find that seams, or the shadow cast by small recesses, are often enough for RC to latch onto. The density of the smudges needs to be high enough that no significant unicolor areas are left.

    Remodeling might also be an option though....

  • john_ondaka

    Thanks for the tips. The pipes are actually pretty new, which is likely part of the problem.

    What do you mean by "Remodeling might also be an option though..."?

  • Götz Echtenacher

    I mean that you could take what you've got, re-create the geometry in a different piece of software, then import that into RC and texture it. From your question I gather you are not experienced in doing this, so it would probably take much longer than re-shooting - though without the organizing :-)

    There are also spray paints available that can be easily removed afterwards (I think they are water-soluble), which is probably quicker than using chalk or stickers. No idea where to get them, though...

  • john_ondaka

    I am pretty new :) 

    We did run these photos through a different piece of software and got results that looked better, but 1. that tool is cost-prohibitive (not to mention it seems to encourage lock-in to their software suite), and 2. the detail up close wasn't very good.

    I'll see about importing their geometry though - hadn't thought about that. 

    Spray chalk is something we looked at, but we have to be very careful with aerosol cans at this site.

  • Götz Echtenacher

    I always imagined that normal wall paint (diluted?) in a pump spray bottle should also do the trick...

    Maybe you can export rectified images from the other software? I did that once and it helped a lot within RC. You won't be able to get more than what you see in the images, though, so don't expect miracles! :-)

  • jason.hunter3

    Hi John,

    I'm a bit late to the conversation, have you already resolved your issue?

    I've had great luck merging scan data from the BLK360 with photo sets. I exported E57 files from ReCap, then imported those into RealityCapture. You need to make sure you export 'COMPLETE' E57 files so that the scan data is pre-aligned before it comes into RC. Once in RC, we've found that setting 'Minimal distance between two points' in the Reconstruction settings to 2 mm allows you to align the laser scans with the photos.

    See this thread where my colleague Spencer discusses the problem and it is resolved by Cabral.

    I also have further documentation of our workflow for BLK360 > RC so let me know if you need support!
