
How to Check Which Models Are Correctly Reconstructed

Answered

22 comments

  • Lucia CR

    Hello Ellison,

    What do you mean by a totally different model?

    For a general basic assessment, have a look at the Assessing Alignment Quality tutorial in the Help section of the software. You can also check the integrity of your models by clicking the Check Integrity button in the Tools part of the RECONSTRUCTION tab.

    For assessing measures, I recommend these two tutorials:

    Scale the Scene

    Geo-reference the Scene

  • Ellison Castro

    Hello lubenko,

    Thank you for your advice. By a 'totally different model' I meant a model which has different measurements than the previous models. Sometimes the model has areas which are blank that were not blank in previous models, and sometimes the model is laid on the grid and sometimes it is not.

     

    I have seen the two tutorials, but I still prefer to use the models as they are. I did try to georeference the scenes before, but the measurements were a bit off from what I expected, so the non-scaled measurements are more usable to me.

  • Lucia CR

    Could you please provide us with some screenshots of the blank areas, so that we can better understand what you mean?

    The default orientation of your models is most likely related to the orientation of your images.

  • Götz Echtenacher

    Hi Ellison,

    From what you describe, I figure that not all your images are actually in one component, right? It seems like you need to work on taking more / better images. The thing is that this randomness becomes especially apparent with models that are on the verge of failing.

    Size and orientation are another thing. Whereas the orientation is almost always correct (as long as there is sky involved), it is very different with the scale. The reason is that RC cannot know how big your objects are unless you give it some information, so it just guesses differently each and every time, which is what you are experiencing. The easiest way to scale the model is to create two control points and define a distance between them, which of course you need to measure on the object first. It should also be covered in the help...
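
    For illustration only, here is a minimal sketch of the arithmetic behind that scaling step - the numbers are made up and this is not RC's API, just the ratio that one distance constraint effectively applies to the whole scene:

    ```python
    # Minimal sketch of scaling a scene with one known distance.
    # All values are made up for illustration.

    model_dist = 3.42   # distance between the two control points, in model units
    real_dist = 1.25    # the same distance measured on the object, in metres

    scale = real_dist / model_dist   # a single ratio scales the whole scene

    # Any other measurement taken in the model can now be read in metres:
    model_height = 5.10
    print(f"scale = {scale:.4f}, height = {model_height * scale:.3f} m")
    ```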

  • Ellison Castro

    @lubenko - I do not know if I can provide screenshots of the blank areas, but they appear blue when I take the orthoprojection of the model. I always use the same set of images when creating a model; sometimes the model is lying on the grid and sometimes standing on it.

     

    @Götz Echtenacher - actually, all of the images are in one component; it never fails to align all images into one component. Is there any way to fix the model's orientation? I am actually taking images from the air.

  • Götz Echtenacher

    Yes, you can rotate the model to make it appear horizontal, although that is only internal and by eye. If you export it, it will be oriented the way it first appears.

    When you say blue, it sounds like there are holes in the mesh and you are looking at the back of the mesh from the other side. Do you simplify the model?

    To judge the issue properly it would be important to see what you're talking about.

  • Lucia CR

    @Ellison: if you get some blue surfaces (like the ones shown in the Create and export an Ortho-Photo & DSM tutorial), these are surfaces that intersect your reconstruction region. You can change the colour if you want.

  • Ellison Castro

    Hello, I hope I am allowed to revive this thread. Here is an example of the problem I am currently experiencing. I am using the same set of images for the two models. Before creating a new model, I delete the initial components so the software starts again with a blank slate.

     

    Model 1:

    Model 2:

    The two models are totally different. How will I know which one is the more accurate model?

  • Götz Echtenacher

    Hi Ellison,

    This is a different issue though, right? Has your problem with the blue faces been solved?

    Do you mean that the orientation is different? This is normal if you don't have georeferenced data...

  • Ellison Castro

    Hello Götz Echtenacher,

    I think this is the same as my original issue of getting a totally different model every time I align the images; if this is a different issue, I would be glad to take it out and create a new post.

     

    Thank you for your help on the blue faces, I think I understood how that works.

     

    Actually, this set of images is georeferenced. Although the orientation is not my priority, I am more concerned as to why the two models are reconstructed differently. Model 1 and Model 2 are completely different.

  • Götz Echtenacher

    Ah, I see - the title of the thread still fits the question.

    It's hard for me to see what you mean though. To me it seems like we are looking from completely different angles at the different models. Even the reconstruction regions seem to be oriented differently. Hence my question about the geo-referencing. Are you sure that it keeps the orientation? How did you do this, by laser scan or ground control points? Also, can you describe what workflow and settings you used for alignment and reconstruction?

    Reading your first posts, it does seem like there is no geo-referencing; otherwise it would not flip around like that. A reason for the outcome being different every time is that the image set is far from ideal and close to breaking up.

    What definitely should not happen at all, though, is that no new component is constructed after an alignment. This is not standard behavior, and if it is reproducible then it might be a bug.

  • Götz Echtenacher

    Is this a wig we are looking at? Because especially considering the low coverage (your area of interest covers barely 15% of the image; it should rather be 80%), 31 images strike me as too few for something as complex as that. In addition, if the object is as smooth as it seems in the screenshots, it probably doesn't provide enough features for reliable tie points...

  • Ellison Castro

    It's hard for me to see what you mean though. To me it seems like we are looking from completely different angles at the different models.

    It would seem that way. But I am curious as to why the two models are different even though I used the exact same set of images to reconstruct them. Or I might have misunderstood this statement.

    Are you sure that it keeps the orientation? How did you do this, by laser scan or ground control points? Also, can you describe what workflow and settings you used for alignment and reconstruction?

    The sample is actually stationary while I take photos of it from different angles. The georeferencing was "theoretical", that is, I calculated the position of the camera (the camera was actually controlled by a machine); a sketch of what I mean is at the end of this comment. As for the workflow and settings, I did not change anything, so I guess the settings were at their defaults.

    Is this a wig we are looking at?

    It is actually a mound of cotton. :P

    Because especially considering the low coverage (your area of interest covers barely 15% of the image; it should rather be 80%), 31 images strike me as too few for something as complex as that.

    I might have misunderstood this. Although my area of interest is the cotton, the region shown in the screenshot is the default reconstruction region given when I make the model. I am not too particular about the fine features of the subject; I am looking at whether its width, length, and height are reconstructed correctly.

     

    By the way, I looked at the Assessing Alignment Quality tutorial in the software's help section, as recommended. I read that I can judge the quality of the alignment by looking at the lens distortion of a specific image, to see whether it is bent in the right way. So I am now looking at the lens distortion of the image taken exactly along the normal of the subject; the distortion shouldn't be too bent if the model is accurate. Please correct me if my understanding is wrong.
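
    For illustration, this is roughly what the "theoretical" georeferencing described above amounts to - a hypothetical sketch with assumed radius, height, and image-count values, not the actual rig or RC's flight-log format:

    ```python
    import math

    # Hypothetical sketch: a machine moves the camera around a stationary
    # object in fixed angular steps, so each camera position can be
    # computed rather than measured. Radius, height, and the number of
    # images are assumed values.

    radius = 0.5     # camera-to-object distance in metres (assumed)
    height = 0.3     # camera height above the subject's base in metres (assumed)
    step_deg = 1.0   # angular step between consecutive shots

    for i in range(31):                       # e.g. 31 images at 0°, 1°, ..., 30°
        a = math.radians(i * step_deg)
        x, y, z = radius * math.cos(a), radius * math.sin(a), height
        print(f"IMG_{i:04d}.jpg  {x: .4f} {y: .4f} {z: .4f}")
    ```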

  • Götz Echtenacher

    OK, in theory geo-referencing should be possible the way you described. But where did you input the camera locations? It is similar to a multi-camera rig workflow, by using XMS files (I think). In any case, the fact that it is different shows that it is not (yet) successful. The orientation is different each time because RC can only guess how big the object is and which way is up, and it does this differently each time. In my experience it works better if there is sky involved, but that might be my imagination.

    So a mound of cotton is still pretty similar to a wig  :-)  in that it has very fine features that you need to capture in your images for the alignment to work properly. What I meant by the coverage is the percentage of area that your cotton mound takes up on your images. It should be the majority because with the black background RC cannot calculate anything and that means that for large parts of the images it's just guesswork for the proper lens distortion.

    Not quite sure what you mean in the last paragraph - you can activate it in the image menu and look at the distortion grid. There are some cases where the distortion is completely wrong. Ideally, if you shot with the same settings, they should be more or less identical. You might get better results by using exif grouping (if you haven't already done that). If you are not sure how that works, a quick search here will provide you with ample reading material...   ;-)

     

  • Ellison Castro

    But where did you input the camera locations?

    I am not sure if I understood correctly; I used the "Flight Log" option in the "Workflow" tab. :P

     In any case, the fact that it is different shows that it is not (yet) successful.

    So, should I add more images? Actually, what I am trying to do is find the number of images that will reconstruct the model with the highest accuracy. Is it always 'the more, the better', given that the images were captured at successive angles (1.0°, 2.0°, etc.)?

    What I meant by the coverage is the percentage of area that your cotton mound takes up on your images. It should be the majority because with the black background RC cannot calculate anything and that means that for large parts of the images it's just guesswork for the proper lens distortion.

    Ah, now I understand. Maybe I will crop some of the excess off.

    You might get better results by using exif grouping

    I will try this. Thank you!

  • Götz Echtenacher

    Hmm, never used flight log before - but that might be written for GPS coordinates rather than local coordinates.

    In general, yes, the more the better. But there can also be too many. The distance between the images needs to be a certain proportion of the distance to the object (called base in photogrammetry). 1° is certainly much too lose. More like 10-15° and at different levels (from above and the side). (A rough sketch of the base arithmetic is at the end of this comment.)

    Don't crop! :-)  That's a no-no because the geometry will not be predictable any more. Use them as they are or shoot again with less distance or more zoom...
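
    Regarding the base mentioned above, a rough sketch with an assumed camera-to-object distance: for a camera circling an object at distance R, an angular step θ between shots gives a baseline of 2·R·sin(θ/2), which shows how tiny a 1° step is compared to 10-15°:

    ```python
    import math

    # Baseline between two camera positions on a circle of radius R,
    # separated by an angular step theta: b = 2 * R * sin(theta / 2).
    # R is an assumed camera-to-object distance.

    R = 1.0  # metres (assumed)
    for theta_deg in (1, 10, 15):
        b = 2 * R * math.sin(math.radians(theta_deg) / 2)
        print(f"{theta_deg:>2}° step -> base ≈ {b:.3f} m (base/distance ≈ {b / R:.3f})")
    ```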

  • Ellison Castro

    The distance between the images needs to be a certain proportion of the distance to the object (called base in photogrammetry). 1° is certainly much too lose. More like 10-15° and at different levels (from above and the side).

    I am only planning to capture up to about 10°, though; I just need the top portion of the subject. What do you mean by "certainly much too lose"?

    Don't crop! :-)  That's a no-no because the geometry will not be predictable any more. Use them as they are or shoot again with less distance or more zoom...

    Does this also apply if I crop all of the images in the same proportion and section?

     

    I apologize for asking a lot of questions; I do not know much about image processing. :)

  • Götz Echtenacher

    Sorry, I forgot a 'c' - it should read 'much too close'!  :-)

    The algorithms used by photogrammetry software expect a certain geometry that is typical for lenses. If you crop the images, that geometry will be changed in a way that RC might not be able to undistort properly. It might work sometimes, especially if your optics have very little distortion, but it really depends. (There is a small sketch of the problem at the end of this comment.)

    It is still good practice to shoot from different angles (of height) because it will help to calculate the geometry more precisely - the lens distortion as well as the resulting mesh.

    I recommend taking new images - it is a common "mistake" to try and fix an existing image set. If you have the chance to go back and take new images, it is almost always much quicker and more effective to do that.

    In theory, you will not need to give RC the coordinates of every single camera; a strategically placed selection will be sufficient. Maybe even better, because those calculated values will probably never be perfectly accurate, and then RC can adjust the positions to properly fit the geometry...
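
    To illustrate the cropping point above, a toy example with made-up numbers (real camera models are more involved): lens distortion is typically modelled around the principal point, and an off-centre crop silently moves that point away from where the software expects it:

    ```python
    # Toy example of why cropping breaks the assumed camera geometry.
    # Numbers are made up for illustration.

    width = 6000
    cx = width / 2            # principal point x, assumed at the image centre

    crop_left = 2000          # crop 2000 px off the left edge only
    new_width = width - crop_left

    cx_true = cx - crop_left      # where the principal point really is: 1000 px
    cx_assumed = new_width / 2    # where software will assume it is: 2000 px

    print(f"principal point error after crop: {cx_assumed - cx_true:.0f} px")
    # Even a perfectly centred crop is risky: the EXIF focal length and
    # sensor size no longer match the new image dimensions.
    ```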

  • Ellison Castro

    Well, part of the experiment is to not change the height of the camera, so I guess the estimation will be off. I do have a chance to re-take the photos, though.

     

    Thank you for your assistance!

  • Tom Foster

    Is it advisable to delete all Components, when hoping that more Alignment run(s) will improve matters? I understood that only the small-image-count Components should be deleted. Though I understand that Ellison is getting full registration of all cameras every time, and he's trying to get a completely fresh attempt at the single Component. I wonder, in that case, whether the cache should be cleared as well, so that no remnant of old calculations remains.

    "1° is certainly much too close. More like 10-15° and at different levels (from above and the side).

    I am only planning to capture up to about 10°, though; I just need the top portion of the subject."

    This must be part of the randomness? AFAIK all images showing less than 10° difference in view direction of any given object should be treated as harmful, confusing near-duplicates and eliminated (and the difference should not be greater than 30° either).

    Also, effective panoramas - photos taken from near-enough the same camera position - are useless.

    Ellison, do I understand that your entire range of camera movement is only 10°, with numerous positions in between?

  • Ellison Castro

    Is it advisable to delete all Components, when hoping that more Alignment run(s) will improve matters?

    I think so. I read here in the community before that the software somehow stores data from previous calculations, which I think is true; when I was aligning a new set of images, they did not align initially, but after a few attempts the software was able to align them.

     

    Ellison, do I understand that your entire range of camera movement is only 10°, with numerous positions in between?

    Yes, I have data taken at 1°, 2°, and so on.

  • Tom Foster

    AFAIK that's much too close an angle - such shots confuse each other, as the trigonometry creates an exponentially increasing liability to depth error. Imagine a tiny base line, then lines running back from its endpoints, converging at 1° - where will they cross? The slightest difference in the angle or in the length of the base line makes a huge difference to the depth at which they cross. I understand that 10° is the minimum safe convergence angle (and 30° is the widest that RC handles, for a different reason). (A rough numeric sketch is at the end of this comment.)

    Pics which show the same object viewed at angles less than 10° apart should be eliminated - am I right, team?

    That's OK, because you should be shooting your fluffball from all points on a hemisphere, i.e. from the side, top, and bottom, not just dead ahead. RC depends on getting a range of different-angle shots of every feature.
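
    A rough numeric version of that triangle argument, with assumed distance and error values: for two rays converging at angle θ, a small angular error ε in one ray shifts the estimated depth by roughly R·ε / (2·sin(θ/2)), so a 1° convergence is dramatically worse than 10° or 30°:

    ```python
    import math

    # Rough illustration of the "where will they cross?" argument.
    # Two rays converge on a point at distance R; theta is the convergence
    # angle. A small angular error eps in one ray shifts the estimated
    # depth by roughly dZ = R * eps / (2 * sin(theta / 2)).
    # R and eps are assumed values.

    R = 1.0        # camera-to-object distance, metres (assumed)
    eps = 0.0005   # ray-angle error from a slightly off match, radians (assumed)

    for theta_deg in (1, 10, 30):
        dZ = R * eps / (2 * math.sin(math.radians(theta_deg) / 2))
        print(f"{theta_deg:>2}° convergence -> depth error ≈ {dZ * 1000:.1f} mm")
    ```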
