Optimising Alignment

Comments

12 comments

  • Götz Echtenacher

    Hi Tom,

    thanks a lot for sharing this painstakingly gathered information!

I find the 62k very interesting - I guess it is a good way to determine the maximum use of the cache for feature detection.

The reason why the alignment errors do not change is simply because you didn't change (I figure) the Preselector setting. This setting tells RC how many of all the detected features should be creamed off the top (as in only the best) and used for alignment. This is also a reason why raising the Max features setting will not necessarily give you a better alignment, since (by default) only the best ones are used.
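A toy sketch in Python (not RC's actual code) of the behaviour described here: detection keeps at most the Max features best features, and the Preselector then creams off only the top-scoring ones for alignment, so raising Max features beyond the Preselector changes nothing:

```python
# Hypothetical illustration of the described selection behaviour:
# detection may find many features, but only the top 'preselector' of
# them, ranked by quality score, are passed on to the alignment stage.

def features_used_for_alignment(feature_scores, max_features, preselector):
    """Return the feature scores that would reach the alignment stage."""
    detected = sorted(feature_scores, reverse=True)[:max_features]
    return detected[:preselector]

scores = [0.9, 0.1, 0.7, 0.8, 0.3, 0.5]
# Raising max_features beyond the preselector changes nothing:
a = features_used_for_alignment(scores, max_features=4, preselector=3)
b = features_used_for_alignment(scores, max_features=6, preselector=3)
assert a == b == [0.9, 0.8, 0.7]
```

The function names and scores are invented for illustration; the point is only that the top-cut happens twice, and the second (Preselector) cut dominates once Max features exceeds it.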

  • Tom Foster

The 62B can't be found by dividing this number by that in Alignment Settings and Report -

it's found by trying different Max features per image and Max features per mpx Settings to find the lowest figures (Optimax) that still give the same Alignment Report results as the first mega-unrestricted run.

So far, I've found it consistently works out at 61-62B (as above) across several different photo sets, for all Sensor sensitivity Settings - except with Sensor sensitivity Ultra it seemed to work out at 52B! (Edit - forget that - 62B it is.)

  • Tom Foster

    About Alignment errors and Preselector - after more trials I'm still finding that Preselector makes no difference at all to Mean and Median error (or any other Alignment report parameter).

    Trying a set of 29 good photos in different scenarios:

    1) with Max features per mpx and Max features per image set very high so that features detected are well unrestricted:

    2) with ditto set at the Optimax that just barely leaves features unrestricted:

    3) with ditto set a bit lower, so that a slight top-cut is applied to features detected:

4) with ditto set to drastically restrict features detected:

In all these, whether Preselector is set at the default 10k, very high at 100k, or reduced in 1k steps from 9k down to 1k, Mean and Median error do not change at all.

What does happen is that from 5k downward, registration of all 29 photos in one Component (which almost always succeeds above that) breaks down, deteriorating to 28, then 27, then to multiple Components registering 2, 2, 3, 17 and 2 photos.

  • chris

Preselector features gets used in the next stage of alignment.

Adjusting this can massively affect RAM usage and how long it takes to align.

  • Götz Echtenacher

    Hi chris, thanks for the input! Good to know we're not alone...  :-)

I think it's also worth pointing out that cranking up the number of detected features will not improve alignment as long as that number is higher than the Preselector, since only the best ones get selected anyway - in both Feature Detection AND the Preselector, at least according to my knowledge.

  • Götz Echtenacher

Although since Tom is looking for a way to improve on scarcely textured plaster surfaces, it could help to capture those extra features. But again, only by also raising the Preselector, which is the number that will actually be used for the alignment and no more.

  • Tom Foster

Yeah, thanks chris. By 'gets used in the next stage of alignment' do you mean Reconstruction, i.e. after Alignment is complete?

I can see more tests coming for what's been mentioned above (not convinced), but I'll have to leave it till after the weekend - back then.

  • Tom Foster

    Optimax and Optimin

In my original post above I was talking about optimum settings of Max features per Mpx and Max features per image, just barely large enough so that neither will restrict Points count or Total projections.

That is to say, I've gone back and edited, so now that kind of optimum I'm calling Optimax. Because there's another kind of optimum, which I'll call Optimin. I'm less sure how it could be useful, but it at least aids understanding of what RC's doing (there could even be an Optimean!).

    In my original post I said:

    “If either or both (of Max features per Mpx and Max features per image) are even a little lower (than Optimax), the MB of the largest Cache file and the total MB of the Cache folder will be a little smaller …

    It’s interesting, in Cache, that any lowering of the Cache file sizes takes the form of several (or all) at a top figure (rather than just one at the topmost figure), showing that several (or all) files are now being top-clipped.”

In fact the pattern at Optimax is that the solitary highest Cache file may be say 8.2MB, the lowest may be 5.2MB, and the other 27 are scattered in between.

If Max features per Mpx and/or Max features per image are set a bit lower than Optimax, then the pattern becomes that the several highest Cache files may be top-clipped to a reduced 8.0MB, the lowest stays 5.2MB and the others are scattered in between.

If Max features per Mpx and/or Max features per image are set lower still, the pattern eventually becomes that all Cache files are top-clipped to that 'floor' of 5.2MB. This I call Optimin. The 'several highest' kinda eat up the 'others scattered between' until all are top-clipped to 5.2MB.

If Max features per Mpx and/or Max features per image are set even lower than Optimin, then chaos breaks out as the 5.2MB 'floor' is breached and Cache file sizes become seemingly random at less than 5.2MB.

    Above, I showed how to establish Optimax:

    “Note the MB of the largest file, denoting the best, most detailed and useful photo in the photo set – say it’s 8.2MB.

Divide it by 62B: 8.2MB/62B = 132K approx.

132K is the Max features per image Setting that is just sufficient to capture, without restriction, all of the Features which RC is capable of extracting from the best photo in the photo set (Optimax).

Divide that by the Mpx of the camera - say 12.1Mpx: 132K/12.1Mpx = 10.9K approx.

10.9K is the Max features per Mpx Setting that is likewise just sufficient to capture, without restriction, all of the Features which RC is capable of extracting from the best photo in the photo set (Optimax).”

    Optimin is established the same way, but using the ‘floor’ figure of say 5.2MB instead of the solitary largest figure of say 8.2MB.

Thus 5.2MB/62B = 84K approx. Max features per image (Optimin),

and 84K/12.1Mpx = 6.9K approx. Max features per Mpx (Optimin).

(The 62B divisor remains the same.)
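The Optimax/Optimin arithmetic above can be sketched as a few lines of Python - a toy calculator, assuming the empirical 62-bytes-per-feature divisor and the example figures of 8.2MB, 5.2MB and 12.1Mpx quoted above:

```python
# Toy calculator for the Optimax/Optimin settings derived above.
# BYTES_PER_FEATURE = 62 is the empirical divisor ("62B") found by trial.

BYTES_PER_FEATURE = 62

def optimum_settings(cache_file_bytes, camera_mpx):
    """Return (Max features per image, Max features per Mpx) for a Cache file size."""
    per_image = cache_file_bytes / BYTES_PER_FEATURE
    per_mpx = per_image / camera_mpx
    return per_image, per_mpx

# Optimax: use the solitary largest Cache file (the best photo), e.g. 8.2MB
img, mpx = optimum_settings(8.2e6, 12.1)
print(round(img / 1000), round(mpx / 1000, 1))   # 132 10.9  (i.e. 132K and 10.9K)

# Optimin: use the 'floor' Cache file size, e.g. 5.2MB
img, mpx = optimum_settings(5.2e6, 12.1)
print(round(img / 1000), round(mpx / 1000, 1))   # 84 6.9  (i.e. 84K and 6.9K)
```

This reproduces the 132K/10.9K (Optimax) and 84K/6.9K (Optimin) figures; substitute your own photo set's Cache file sizes and camera Mpx.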

  • Tom Foster

    Detector sensitivity

    In my original post above I said

    “Of the other Alignment Settings, I’ve found that only Detector sensitivity affects all the above – but not a lot unless it's a really poor-textured set of photos.”

With a good set, going from Detector sensitivity Med>Hi>Ult, Cache size (proportional to Features detected) steps up a so-so 16% then 6%, but resultant Points and Projections by a trivial 3% and 1%. Going from Med>Lo, Cache size steps down a huge 60% but Points and Projections less - only 20%. Preselector variation had no effect on any of this.

With a poor set, going from Med>Hi>Ult, Cache size steps up an amazing 150% then 66%, the biggest Component improves, and Points and Projections step up by 60% and 50%. Going from Med>Lo, Cache size steps down a catastrophic 580% but Points and Projections only 35%!

    It’s hard to imagine what Detector sensitivity Lo is good for!

    Stay away from Lo, and Detector Sensitivity hardly figures for Aligning a good photo set but is vital for Aligning a poor set.

Detector sensitivity Ult is supposed to be only for multi-camera rigs - but it seems to me indispensable for a poor set.

  • Tom Foster

    Preselector

In his first post (reply) above, Götz said:

“The reason why the alignment errors do not change is simply because you didn't change (I figure) the preselector setting.”

    But towards the end of my original post I had said:

    “Preselector features has no effect that I can find.”

    chris then explained:

    “preselector features, gets used in the next stage of alignment.”

    To which I asked:

“By 'gets used in the next stage of alignment' do you mean Reconstruction i.e. after Alignment is complete?” (not answered yet).

With 29 good photos, I tried Preselector features set at 100K, 10K (default), and 9, 7, 6, 5, 4, 3, 2 and 1K.

    These made no difference at all to RC’s analysis of the photo set – the same distribution of Cache file sizes every time,

hence giving the same Optimax figures (see above): 148K Max features per image (default 40K) and 148K/12.1 = 12.3K Max features per Mpx (default 10K).

    Right down to Preselector features set at 6K, all 29 pics registered in one Component,

    and every run gave approx. 105K Points count, 265K Total projections, 44secs, 0.34px Median and 0.45px Mean error.

With Preselector features set ridiculously low at 5, 4, 3, 2 and 1K, Registration, Points count and Total projections all fell apart, but secs and errors stayed the same.

So I guess chris is right: Preselector is irrelevant to Alignment and only comes in with Reconstruction - will be checking it out shortly.

  • Vlad Kuzmin

Sorry guys, but these experiments have zero meaning.

Max features per Mpx or per image, image overlap, preselectors, detector sensitivity, etc. - all of these belong to the SIFT, SfM, FAST etc. methods used in the Alignment step.

Default settings are perfect for any correct dataset.

Bad datasets, or datasets with masking, low overlap, weak textures, etc. can require fine tuning. But this fine tuning is not about how many features you choose; it is about the dataset itself. You look at the source data and you understand which settings you need to adjust.

But what I can say is this: the more features you try to detect and the more preselector features you set, the more memory RC will need for Aligning.

More features than needed on a bad dataset gives you worse results, due to false-positive matching errors.

     

To be honest, to correctly align cameras and estimate lens distortion you need only 10 detected features. 100 is more than enough; 1000 is perfect for any situation. This means: do not raise the Preselector and features higher than needed.

  • Götz Echtenacher

    Hi Vlad,

    thanks for your input.

    Obviously, the image sets that we are discussing are not ideal. The world just doesn't always provide perfect conditions.

And it's about understanding which setting causes what kind of response.

    That this is not always predictable and varies greatly with the image set is clear (at least to me).

Nevertheless, there are people who haven't grasped all of it yet (including myself).

So what you said about 10 features being enough for a perfect image set is quite helpful in a way. On the other hand, it doesn't help with solving something difficult.

I just had a really bad project, a building with lots of vegetation around it (which I could not remove). So there I was with a certain budget, having to make the best of it. In my case, raising the Preselector features from 10k to 20k did the trick. I finally got components big enough to work with. Adding more than 100 CPs was still a lot of work, but I got a useful result in the end, with all 20 GCPs having an error below 5mm. I can live with that.

    @Tom:

    Sorry, you are right - Preselector is of course for the next step, alignment.

I have to agree with Vlad a tiny bit, in that I think this is mainly a way (for you) to understand how RC works and what all the settings mean. That doesn't mean that it's unnecessary - on the contrary, it may be just what many people need to know, if they have the patience to read through it all (which might be difficult for the snapchat generation nowadays...  :-)

