I have a question about merging components and how it differs from Align Images with respect to the relative positions of the cameras. Let me set the stage:
- I have a set of images that cover about half a mile of a street. This has been aligned into a single component.
- I have a set of LIDAR scans covering the same stretch of street, also aligned into a single component.
When I merge them using control points and Merge Components, I find that the farther you move from the center of the scans, the worse the texture-to-geometry alignment becomes.
My guess is that there is much more accumulated error in the photogrammetry alignment than in the LIDAR alignment.
I suppose that Merge Components doesn't correct the images' positions and orientations in any way; it just performs a rigid "best-fit" between the two components. I imagine that running Align Images (with the LIDAR locked in place) might produce a much more accurate alignment between the images and the LIDAR. But because the LIDAR has no color (only intensity) and there are >20,000 images, I'm worried the alignment would take a very long time to complete, if it completes at all.
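To illustrate why a rigid merge leaves the ends worse off, here is a small toy sketch (not RealityCapture code; the drift model and numbers are made up for illustration). A photogrammetry trajectory with quadratically accumulating drift is fit to a LIDAR "truth" trajectory with a single rigid best-fit (Kabsch) transform, the way a rigid merge treats each component. The residuals are smallest near the middle of the strip and grow toward the ends:

```python
import numpy as np

n = 101
x = np.linspace(0.0, 800.0, n)               # ~half a mile of street, metres
lidar = np.stack([x, np.zeros(n)], axis=1)   # LIDAR positions, taken as truth

# Hypothetical photogrammetry drift: error accumulates with distance
# from the middle of the strip (quadratic bowing of the trajectory).
drift = 1e-5 * (x - x.mean()) ** 2
photo = lidar + np.stack([np.zeros(n), drift], axis=1)

# Single rigid best-fit (Kabsch/Procrustes) of photo onto lidar --
# analogous to merging two components without re-optimizing cameras.
pc, lc = photo - photo.mean(0), lidar - lidar.mean(0)
u, _, vt = np.linalg.svd(pc.T @ lc)
d = np.sign(np.linalg.det(u @ vt))
rot = u @ np.diag([1.0, d]) @ vt
aligned = pc @ rot + lidar.mean(0)

resid = np.linalg.norm(aligned - lidar, axis=1)
print(f"residual at centre: {resid[n // 2]:.3f} m")
print(f"residual at ends:   {resid[0]:.3f} m")
```

The rigid fit spreads the accumulated error across the strip instead of removing it, so the mismatch is worst at the extremes, which matches what I'm seeing. Only a re-optimization of the camera poses (i.e., a new alignment/bundle adjustment) can actually bend the drift out.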
Is there a streamlined method for aligning two such sets that applies only a minor overall error adjustment to the photogrammetry set, without it taking several days? What combination of Absolute Pose settings, Relative Position Uncertainty, or other settings would help here? Or do I completely misunderstand what Align and Merge do?