While experimenting, I noticed something interesting. It probably has something to do with poor alignment of certain images, and I wanted to see what you think.
In some situations a model generated with 2 images looks better than one generated with 300 images.
Here is a part of a model made with ~300 images:
Then I made a new model with only 2 images that covered the same area of interest and got the following:
Of course, the second model covers only a small section of the first, and as expected a lot of areas are missing. But the overall appearance of the properly reconstructed parts of the second model looks much better than the first.
So the question is 'Why?' Is this to be expected?
Would the first model accumulate more error from trying to align so many more images?
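To make that hypothesis concrete, here is a toy sketch (plain NumPy, not tied to any particular photogrammetry package) of the effect I have in mind: if each contributing depth map carries a small random alignment offset, fusing many of them averages the offsets and blurs fine surface detail, whereas two well-aligned views keep the detail sharp. All the numbers (bump width, error magnitudes) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)

def fused_surface(n_maps, shift_sigma):
    """Average n_maps copies of a sharp 1D surface bump, each copy
    shifted by a random per-image alignment error (std = shift_sigma)."""
    maps = []
    for _ in range(n_maps):
        shift = rng.normal(0.0, shift_sigma)
        maps.append(np.exp(-((x - 0.5 - shift) ** 2) / (2 * 0.01 ** 2)))
    return np.mean(maps, axis=0)

# Two well-aligned views: tiny alignment error, detail survives.
sharp = fused_surface(2, shift_sigma=0.002)

# Many views, each with a larger alignment error: the bump smears out.
blurred = fused_surface(300, shift_sigma=0.02)

print("peak of 2-view fusion:  ", round(sharp.max(), 3))
print("peak of 300-view fusion:", round(blurred.max(), 3))
```

In this sketch the 300-map fusion always comes out flatter than the 2-map one, which matches what I think I am seeing: more images are not better if some of them are poorly aligned.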
I thought maybe texturing with many overlapping images was the culprit, so I textured the first model using only the same two images as in the second model (I unchecked 'Tx' on all the other images). It still looks worse.
Any thoughts / advice / suggestions would be appreciated.