Recently, I noticed that some of my results were not as good as I expected. Even though the geometry was very good and precise (high detail with some careful simplification), I still ended up with a texture that was considerably less crisp than the closest input images. I am not talking about problems caused by misalignment or an unsuitable texel size, just the resulting texture showing far less detail than should be possible. And I mean the real texture as seen on an orthophoto, not the sometimes problematic reduced texture on the mesh as seen in the 3D view.
So I disabled most of the images taken at a greater distance from the mesh surface, and voila, the result was much crisper. That means RC's algorithms are not ideal in that respect, contrary to Zuzana's statement in this thread: https://support.capturingreality.com/hc/en-us/community/posts/360003794351-what-photos-are-used-for-texturing-
Did anybody else make similar observations, and does anybody have a solution other than switching off certain cameras? Ideally, RC should weight the texture contribution of close cameras higher than that of distant ones (or rather prefer the ones with a higher resolution on the surface).
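To illustrate what I mean by "higher resolution on the surface": for a pinhole camera, a pixel roughly covers distance / focal-length-in-pixels meters on a surface facing the camera, so the projected texel size grows with distance. A blending weight based on the inverse of that texel size would automatically prefer the sharpest views. This is purely my own sketch of the idea (the function names and the power-law formula are illustrative assumptions, not RC's actual internal weighting):

```python
def texel_size_on_surface(distance_m, focal_px):
    """Approximate size of one pixel projected onto the surface
    (meters per pixel), for a surface patch facing the camera."""
    return distance_m / focal_px

def texturing_weight(distance_m, focal_px, power=2.0):
    """Hypothetical blending weight: smaller projected texels win.

    Raising the inverse texel size to a power > 1 makes the blend
    prefer the closest / highest-resolution views more aggressively.
    """
    gsd = texel_size_on_surface(distance_m, focal_px)
    return (1.0 / gsd) ** power

# Two cameras seeing the same surface patch: one at 2 m, one at 10 m,
# same lens (focal length ~4000 px).
w_near = texturing_weight(2.0, 4000.0)
w_far = texturing_weight(10.0, 4000.0)
print(w_near / w_far)  # with power=2, the near camera outweighs the far one 25x
```

With a weighting like this, distant images would still fill areas the close ones don't cover, but would barely dilute the detail where close images exist, which is roughly the behavior I got by disabling the distant cameras manually.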