photogrammetry and scan





    This looks like a classic photogrammetry issue: when there is not enough overlap, it tends to do that.

    Black represents a gap between the photos, and each colour a photo overlap. Remember that to reconstruct a point in XYZ by photogrammetry you need at least two photos. So if an area is covered by only one "layer" of photos, with no second overlapping photo for that specific area, its points won't be reconstructed. Differences can also appear where the number of overlapping photos varies significantly. That is why, when I need to be sure of getting a really dense model, I take two rows of photos to give 200% coverage for every part.
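    The two-photo requirement comes down to triangulation: one camera ray fixes a direction but not a depth, so you need a second ray from an overlapping photo to pin the point in 3D. A minimal sketch (pure geometry, not any specific software's implementation):

    ```python
    import math

    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]

    def triangulate(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two camera rays.
        With only one ray the depth along it is unconstrained, which
        is why a point needs at least two overlapping photos."""
        d1, d2 = norm(d1), norm(d2)
        w0 = sub(p1, p2)
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b
        if abs(denom) < 1e-12:
            return None  # parallel rays: no unique intersection
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        q1 = [p + s * x for p, x in zip(p1, d1)]
        q2 = [p + t * x for p, x in zip(p2, d2)]
        return [(x + y) / 2 for x, y in zip(q1, q2)]

    # Two cameras whose rays both pass through the point (1, 2, 5):
    pt = triangulate([0, 0, 0], [1, 2, 5], [4, 0, 0], [-3, 2, 5])
    # pt ≈ [1, 2, 5]
    ```

    With noisy rays the two lines are skew, and the residual gap between them is exactly the kind of reprojection error that grows when overlap is poor.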

    Do you know how aligning points from photos and LiDAR data works? Is the final cloud generated from photogrammetry simply registered against the LiDAR cloud, or is every photo "pinned" to corresponding points one by one? And what happens when the points don't fit perfectly? Does it create another layer of points with a small offset (noise, generally) due to errors, or is there an algorithm that erases all points whose error is too big, taking the LiDAR data as the reference?
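    The second possibility raised above can at least be sketched: keep only the photogrammetry points that lie close enough to the LiDAR reference. This is purely illustrative (brute-force nearest neighbour, made-up names, and not how RealityCapture's fusion actually works internally, which is not public):

    ```python
    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def filter_against_reference(points, reference, max_error):
        """Keep only points whose nearest reference (LiDAR) point is
        within max_error. Crude O(n*m) sketch of outlier rejection."""
        kept = []
        for p in points:
            nearest = min(dist(p, r) for r in reference)
            if nearest <= max_error:
                kept.append(p)
        return kept

    lidar = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
    photo = [[0.01, 0, 0], [1.0, 0.02, 0], [5, 5, 5]]  # last point is far off
    clean = filter_against_reference(photo, lidar, max_error=0.1)
    # clean keeps the first two points and drops the outlier
    ```

    A real pipeline would use a spatial index (k-d tree/octree) instead of the brute-force search, and would typically register the clouds (e.g. via ICP) before any such filtering.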


  • Erik Kubiňan CR


    We are sorry for the late answer.  

    Your model is most likely missing some information. Try to add more photos and/or another scan position. The density of the point cloud reveals how much detail has been captured and how detailed the object you are shooting is. Getting closer and providing RC with more information will result in a much denser point cloud and a better model.
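    A rough way to see why getting closer helps: the ground sampling distance (the real-world size of one pixel) shrinks linearly with shooting distance, so halving the distance roughly quadruples the surface point density. A sketch using the standard GSD formula, with illustrative camera numbers not taken from this thread:

    ```python
    def ground_sampling_distance(distance_m, focal_mm, sensor_width_mm, image_width_px):
        """Real-world size of one pixel (in metres) at the given distance."""
        pixel_size_mm = sensor_width_mm / image_width_px
        return distance_m * pixel_size_mm / focal_mm

    # Hypothetical full-frame camera: 24 mm lens, 36 mm sensor, 6000 px wide.
    gsd_far = ground_sampling_distance(10.0, 24.0, 36.0, 6000)   # 2.5 mm/px
    gsd_near = ground_sampling_distance(5.0, 24.0, 36.0, 6000)   # 1.25 mm/px
    ```

    Keep in mind that shooting closer covers less area per photo, so you need more photos to maintain the same overlap.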
