Coaxing/determining distortion parameters for fisheye

Answered

15 comments

  • admin
    Hi Ben,

    First of all, make sure you set the radial distortion in the lens priors to "unknown". Select one or more images and change "Prior lens distortion":

    [screenshot: rd-unknown.JPG]


    If your optics are constant, you might want to set camera parameter grouping. Set the same number for "Lens group" and "Calibration group" to group camera parameters. RC can do this automatically based on EXIF: click the "Images" root in the 1Ds view and select grouping by EXIF.

    If this did not help, the problem might be that we currently support optics with a field of view below 180° (the Brown model), so points at the edges are not modeled very well. Making an image cutout might solve this, or changing the camera model. Our observation is that the "Division model" (you can change the camera model under Alignment\Settings\Advanced) behaves better for fisheye. You might use the division model first and then change it to "Brown" and click align to optimize the data. This is recommended if you mix different cameras.
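    For reference, here is the standard textbook form of the two radial models mentioned above — the Brown polynomial model and the one-parameter division model. This is a rough numeric sketch only; the coefficient names (k1, k2, lam) and the sample values are assumptions, and RC's internal parameterisation may differ.

```python
import numpy as np

def brown_radial(r, k1, k2, k3=0.0):
    """Brown polynomial model: maps undistorted radius r to the
    distorted radius (textbook form, radial terms only)."""
    return r * (1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6)

def division_model(r_d, lam):
    """One-parameter division model: maps the other way, from
    distorted radius r_d back to the undistorted radius."""
    return r_d / (1.0 + lam * r_d**2)

# Sample both models across normalised radii with made-up coefficients.
r = np.linspace(0.0, 1.0, 5)
print(brown_radial(r, k1=-0.3, k2=0.1))
print(division_model(r, lam=-0.3))
```

    The polynomial's higher-order terms dominate at large radii, which is consistent with the observation above that the Brown model struggles near the edge of a ~180° fisheye, while the division model stays better behaved there.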

  • Ben Kreunen
    Thanks for the info. I've got those down already and have had a bit of success with them. I will try the division model later on.

    In addition to that I've also found starting with a higher reprojection error can be very useful. Seems to be working well for both circular fisheyes and full frames.
    [screenshot: Capture.PNG]


    • First alignment with error=8
    • If more than one component is produced (and you're only expecting a single component), try a second alignment with error=8
    • If more than one component remains after the second alignment (and there are only a small number of components), add CPs and re-run the alignment. If many small components were created there may be other problems.
    • So far I've been wimping out and running extra alignments after that, reducing the reprojection error down to 4 and then 2, but I could possibly skip straight to 2.

    This is potentially more useful in this case than increasing Preselector features and Max features per image (so I can possibly go back to 40000 and 80000).

  • Ben Kreunen
    Here's a screengrab of a test alignment of two image sets shot a couple of weeks apart (with different colour adjustments so I can identify each set). There are two problem areas in this set which were interesting.

    When aligning just a subset of images in region 1 everything aligns OK, but when there is a larger number of images the distribution of the tie points seems to change a bit, resulting in drifting of the alignment. The two problem areas have something in common relating to the positioning of the cameras, and this seems to be contributing to the alignment issue.

    Region 1 is a short section of images containing a single camera path that travels along the footpath and then returns back to the start. The alignment drifting can be fixed in this instance by renumbering the images so that the misaligned images are listed just after the overlapping images from the other end of the camera path, but in practice I'll adjust my shooting strategy to avoid this.

    Region 2 has a similar problem so at least I know that one method of going around a corner works and one doesn't (work well). A second alignment will join the other small component which consists of a small diversion of the camera path and the return path.

    Masking the area outside the circular image does seem to make a difference, although that is already part of my pre-processing workflow, along with masking the white sky. I was masking inside the circular image to remove the crappy outer area with lots of chromatic aberration, and I've now added a crop to just outside the image circle, which reduces the FOV of the image to about 170°... again, it seems to help, but I need to run a few more tests to verify.
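    The circular masking step can be sketched with a boolean mask, assuming a centred image circle; the centre, radius and the radius_frac value below are placeholders you'd measure for a real lens, not anything RC-specific.

```python
import numpy as np

def circular_mask(height, width, radius_frac=0.95):
    """Boolean mask keeping only pixels inside the image circle.
    radius_frac < 1 also trims the outer ring, where chromatic
    aberration is worst (the 'crop to ~170 deg' idea above)."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    radius = radius_frac * min(cy, cx)
    yy, xx = np.ogrid[:height, :width]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

# Apply: zero out everything outside the (slightly shrunken) circle.
mask = circular_mask(8, 8)
image = np.ones((8, 8))
image[~mask] = 0
print(int(mask.sum()), "pixels kept")
```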

  • Ben Kreunen
    Think I've taken it about as far as I'm going to get. I'll have to add a second camera path at a different height to get some of the overhangs and that should help get more detail in the tree ferns. Here's a point cloud of a "normal" reconstruction (outliers removed in CloudCompare). ~30M points. It's pretty easy to spot where I took extra shots to increase the level of detail, although normally I'd do those with a full frame fisheye or wide angle.
    http://files.digitisation.unimelb.edu.au/potree/pointclouds/rc-fisheye-test.html

    Re-ordering the images made a significant improvement to the alignment in this case so I'll adjust my shooting strategy accordingly.

  • brennmat
    The potree point cloud looks pretty good. I'm curious what happens when you try meshing and texturing it. I've done a number of tests with an 8mm fisheye lens and the geometry construction is great, but you really see the limits of using a fisheye when you try to texture it and end up with lots of blurry, stretched parts.

  • Stuart
    I too am using a wide-angle (170°) lens on a GoPro-type action cam. I have been shooting video and extracting frames as JPEGs at 2-5 frames per second, depending on the length of the sequence. The results I have got are very good: the meshes are clean and detailed, and I have not had any issues with stretched textures like you, brennmat. Perhaps you might need to crop your source images to remove some of the distortion.

    I have had a few issues with my setup, I do tend to get warped meshes on objects that are further away, for example the tops of buildings.

    What would be the correct workflow when working with wide angle lenses and what distortion settings should we use?

  • Ben Kreunen
    brennmat wrote:
    The potree point cloud looks pretty good. I'm curious what happens when you try meshing and texturing it. I've done a number of tests with an 8mm fisheye lens and the geometry construction is great, but you really see the limits of using a fisheye when you try to texture it and end up with lots of blurry, stretched parts.


    The point cloud is actually the vertices of the mesh from RC. As for the textured model, here is the upper section: https://skfb.ly/KUE9. If you have blurry, stretched parts of a model it's due to misalignment and/or incorrect distortion parameters. For image sequences with small intervals between shots the results usually optimise quite easily, but when you get more complex geometry and multiple camera paths things can go wrong.

    I've processed a few buildings shot with a GoPro and everything worked fine, but a larger, more complex building shot with the same camera exhibited weird "ghosting" of misaligned components due to incorrect distortion parameters. This was largely due to the pilot's choice of camera paths not suiting the algorithms used by RC. Using the process I described here, I got it to align as nicely as could be expected with just two alignments, and had a complete model after adding a few CPs to join the two components produced.

    Stuart wrote:
    What would be the correct workflow when working with wide angle lenses and what distortion settings should we use?


    To summarise this thread so far:
    • Group the cameras
    • Lens distortion = Unknown
    • First alignment with Max reprojection error = 8
    • Repeat once if multiple components are produced
    • Add CPs if required
    • Final alignment with Max reprojection error = 2 (or lower if you're brave ;) )

    I haven't found any benefit from trying higher reprojection errors, or from gradually reducing the error to get to the final alignment. The shooting sequence also seems to play a part, but for highly structured camera paths this approach seems to be reasonably robust. Let me know how you go.
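    The summary above is really just control flow, which can be sketched like this. The Project object below is a mock standing in for manual steps in the RealityCapture UI; none of these method names exist in RC.

```python
class MockProject:
    """Stand-in for an RC project; pretends each alignment pass
    merges one component, so only the control flow is meaningful."""
    def __init__(self, n_components=3):
        self.n = n_components
        self.max_error = None
        self.passes = []          # reprojection threshold per pass
    def align(self):
        self.passes.append(self.max_error)
        self.n = max(1, self.n - 1)
    def components(self):
        return list(range(self.n))

def coarse_to_fine(project, add_cps=lambda p: None):
    project.max_error = 8         # loose first pass keeps edge tie
    project.align()               # points for fisheye optics
    if len(project.components()) > 1:
        project.align()           # repeat once if still split
    if len(project.components()) > 1:
        add_cps(project)          # fall back to manual CPs
        project.align()
    project.max_error = 2         # tight final optimisation
    project.align()
    return project

p = coarse_to_fine(MockProject(3))
print(p.passes)   # thresholds used per pass
```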

  • Stuart
    Thanks for your reply, Ben. How do I re-align my images? When I change settings and hit the align button I just get a new duplicate set of components. I tried "force component rematch" but that had no effect.

    I must be missing part of this workflow somewhere, I am quite new to this.

  • Ben Kreunen
    I'd remove all components if you're switching to this from a default alignment (or just start a new project). Here are my alignment settings (for 20 MP images).
    The specific ones to note are sensitivity, overlap, preselector features and max features per image. For the latter two I haven't found any significant difference between 60,000/120,000 and the 80,000/160,000 that was suggested to me... except I only have 32 GB of RAM and found that an alignment of 2,500 images with the higher settings could run out of RAM in the last stage of alignment. For 12 MP images, 40,000/80,000 is adequate (it's unusual to get more than that anyway).

  • Stuart
    Thanks Ben, that has made an improvement to my scans.

    I hadn't understood that the alignment process could be iterative; it makes more sense now.

    What exactly does the "preselector features" value do?

  • Ben Kreunen
    I'll let someone else provide an actual definition, but together with max features per image they basically determine how many points are used to align two images. In my experience it takes more points for fisheye images to determine good values for the lens distortion, without which you don't get good alignment. For GoPro images you can get away with 40,000/80,000, as it doesn't often find more than 80,000 image features.

    More = theoretically better, but longer processing and more RAM. Less = faster, but too low results in bad alignment.

  • Stuart
    Preselector Features is a very important value for increasing the quality of your scans, increasing this has allowed me to reduce the reprojection error value back to 2-4.

  • Ben Kreunen
    Stuart wrote:
    Preselector Features is a very important value for increasing the quality of your scans, increasing this has allowed me to reduce the reprojection error value back to 2-4.


    After an initial alignment with the reprojection error at 8, I run another alignment with the reprojection error at 2. Starting off at 2 seems to drop a lot of tie points around the outer portions of the image, with the result that the lens distortion parameters are only effective in the middle of the image. By starting off at 8 you get lens distortion parameters that are better across more of the image, but not particularly great anywhere. This at least gives the second alignment a better starting point for the lens distortion parameters... hence the "coaxing" part of the title.

  • brennmat
    Ben Kreunen wrote:
    The point cloud is actually the vertices of the mesh from RC. As for the textured model, here is the upper section: https://skfb.ly/KUE9. If you have blurry, stretched parts of a model it's due to misalignment and/or incorrect distortion parameters. For image sequences with small intervals between shots the results usually optimise quite easily, but when you get more complex geometry and multiple camera paths things can go wrong.

    I've processed a few buildings shot with a GoPro and everything worked fine, but a larger, more complex building shot with the same camera exhibited weird "ghosting" of misaligned components due to incorrect distortion parameters. This was largely due to the pilot's choice of camera paths not suiting the algorithms used by RC. Using the process I described here, I got it to align as nicely as could be expected with just two alignments, and had a complete model after adding a few CPs to join the two components produced.


    I generally don't get "blurry areas" any larger than the blurry areas seen in the sketchfab model you posted, which is fine if it's a model of grass or rocks, but when it's a detailed frescoed or painted interior or vault, they are quite noticeable. Unfortunately, I think it's just a part of the stretching and blending that both Photoscan and RealityCapture do to the photos when they make the texture.

  • Ben Kreunen
    I've nearly always been able to identify slight camera misalignment as the cause of blurry areas of texture in my models, including orthophotos of laneway walls. It's often visible as defects/noise in the high-resolution mesh. It would be interesting to see a texture created with cameras selected by proximity to the surface normal.
