How to optimize Reconstruction

Comments

34 comments

  • Jonathan_Tanant
    We definitely need a tie points filtering feature. Align first with a high max error, then filter out.

  • Götz Echtenacher
    Hi Benjamin,

    On CP weight, there is something covered in this thread: Manual improvement of alignment / sparce point cloud
    Just try it out for yourself, there is no limit afaik. I used 200 before. The number apparently represents the number of images it simulates, so if you have a CP with alignment issues covered by 20 images, a weight of 20 will result in a 50/50 split between RC's automatic result and your manual one - that's how I understand it. It's also great for identifying errors in CP placement, because with high values it will screw up an alignment big time, making it easy to pinpoint the source. Having used CPs extensively, I know that despite being extremely careful, I still manage to misplace one or two out of a couple of hundred (on individual images, I mean).
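
    If it helps, here is a tiny sketch of that mental model in Python - my own assumption about how the weighting might behave, not anything documented by RC:

        # Toy model of "CP weight counted like simulated images" -- an assumption, not RC's documented math.
        def manual_share(cp_weight: float, n_images_covering_cp: int) -> float:
            """Rough fraction of influence the manual CP gets versus RC's automatic tie points."""
            return cp_weight / (cp_weight + n_images_covering_cp)

        print(manual_share(20, 20))   # 0.5   -> the 50/50 case described above
        print(manual_share(200, 20))  # ~0.91 -> a very high weight largely overrides the automatic result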

    I don't see why, in theory, you shouldn't be able to basically build a manual alignment out of many CPs. Although I am also quite certain that your mentioned artefacts are due to minor errors in CP placement. With the above-mentioned method I was able to finally resolve a troublesome project - with the last identified error, all of a sudden everything snapped into place. I won't say how long it took though... :D

    As I understand it, detected features are stored in the cache, not alignment results. That means if your source images don't change, there is no point in deleting the cache. If you have trouble with an alignment, deleting all components is sufficient. I know that because I use GCPs (from a theodolite survey) for buildings, and sometimes, with for example a very long wall, there can be a slight distortion. Once I added my GCPs, I expected the model to be "pulled" to the real coordinates, but that was not the case. Only after deleting all components did it finally work. So it seems that older components will influence further alignments heavily, whereas GCPs will only assist to a certain degree. Eliminating all components is like a reset, which supports that assumption.

    I think that yes, RC readjusts each camera with every new alignment. Just compare the same source images by activating different components - the distortion values and coordinates will slightly change.

    A different way to work around the missing tie point filter:
    Align the troublesome part in a-software-we-all-know and use gradual selection there to get rid of the TPs with a high reprojection error. Then export the undistorted images and use those within RC. In my case, that improved the result in RC quite a bit.
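
    Conceptually, the gradual selection step there boils down to a filter like this (a minimal sketch, assuming you can read out a per-point reprojection error in whatever tool you use):

        # Minimal sketch of gradual-selection-style filtering -- assumes each tie point
        # carries a reprojection error in pixels, whatever tool produced it.
        def filter_tie_points(tie_points, max_error_px=1.0):
            """Keep only the tie points whose reprojection error is below the threshold."""
            return [tp for tp in tie_points if tp["reproj_error_px"] <= max_error_px]

        points = [{"id": 1, "reproj_error_px": 0.4},
                  {"id": 2, "reproj_error_px": 2.7},   # a bad match
                  {"id": 3, "reproj_error_px": 0.9}]
        print(filter_tie_points(points))  # point 2 is dropped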

    Saying this, I just had an idea. Further up I argued that older components influence the result. So it might just work to do a first alignment with a low reprojection error (which might result in many components) and then another one with a slightly higher one, and so forth. Or it could work the other way round. Who wants to try it out? :-)

  • Benjamin von Cramon
    Hello Götz,

    Many thanks for the detailed response. You've clearly given this area a good shake and addressed several of the ideas I'd anticipated being most relevant to this question of manual intervention - and to what distinguishes working with RC from fighting it (operator error) or spinning tires (RC simply won't go any further, even leveraging everything manual intervention currently allows).

    Yes, I've seen how deleting Compositions clears stored detected features, which you also see reflected in changes to reprojection errors per CPs, or I guess the manual version of those are actually called tie points - will stick with the shorter CP. That's an interesting thought to use Components with lower reprojection error residing next to other Components with more aligned cameras but slightly higher reprojection error in combination. Two thoughts about that; I've run into the problem of trying to export an older Component sitting next to a younger and gotten the red screen reporting RC can't locate images referenced in the Component. Deleting the younger Component and in addition sometimes running Alignment on the older would resolve that problem. Not that we're concerned here with exports, but just to point out other messiness lying in wait with older and younger Compositions referencing the same imagery.

    On the plus side, I can see your idea in part mirrors what I just tried and got it to work (in part) by playing with the order in which images are enabled in an Alignment and growing the model like rows of bricks one locked to the next, rather than throwing all the bricks at RC to figure out the wall. I still see the stepping, as you described it in your linked thread, but now am thinking to add 10 or 10s of CPs and perhaps play with increased Weight to see if that reduces or fixes the step.

    I'd like to confirm my understanding of how the Points Lasso and Filtering might be used to keep the majority of (good) points in a Composition (to save processing time), kill the offending points in a stepped section, add more CPs, increase their weight (relative to how many cameras are referenced in that section, as you clarified), then run Alignment.

    Regarding my questions about correctly using Force component rematch and Merge components only, if the detected features (points) are stored in a Composition, then these two settings would appear to only control what they say, keeping a Component from splitting into two and a black&white attempt to merge any active Components (their cameras enabled) into a single Component. And if that's correct and nothing more, I'd think these settings do nothing to provide or remove "plasticity" in the points. Yes?

    Another way of looking at that is to think of points having weight, so what I'm after is to tell RC, give these CPs lots of weight, give RCs points much less, so will those two settings have no bearing on the latter? I'm thinking filtering out points in the distorted or stepped section of a Composition in combination with heavily weighted CPs, while not allowing competing Compositions to poison the waters, may represent the most one can do at this stage in RC to optimize manual intervention. Thanks for your feedback, Götz, and thanks WishGranter, if you can further clarify this topic.

    Best,
    Benjy

  • Götz Echtenacher
    Hi Benjamin,

    Yes, I did puzzle over it a bit - as well as you! :-)

    Benjamin von Cramon wrote:
    Yes, I've seen how deleting Compositions clears stored detected features

    Do you mean "components"? Because I think features (so to speak the potential tie points on each image) are stored in the cache, whereas the alignment is stored in the component...
    Benjamin von Cramon wrote:
    , which you also see reflected in changes to reprojection errors per CPs, or I guess the manual version of those are actually called tie points - will stick with the shorter CP.

    Since there is a fundamental difference between the two, rather call tie points TPs... :-)
    Benjamin von Cramon wrote:
    I've run into the problem of trying to export an older Component sitting next to a younger and gotten the red screen reporting RC can't locate images referenced in the Component.

    Yes, it happened to me too and the cause has not been explained to me to my satisfaction. I think it might have to do with the missing tie points in the cache somehow, but it does not seem to be persistent behaviour...
    Benjamin von Cramon wrote:
    Not that we're concerned here with exports, but just to point out other messiness lying in wait with older and younger Compositions referencing the same imagery.

    Hmm, I did not consider the multitude of components as a cause for that behaviour - I will check that next time (which of course I hope there won't be...)
    Benjamin von Cramon wrote:
    I'd like to confirm my understanding of how the Points Lasso and Filtering might be used to keep the majority of (good) points in a Composition (to save processing time),

    I'm not quite sure what you mean by that - you can't delete any points with the lasso tool, only highlight them to identify cameras.
    What I did was to raise the weight of several CPs where I suspected an error to 200. After a new alignment, the resulting errors at each image within a CP became quite high, the faulty ones even higher. I corrected those and lowered the weight again.
    Benjamin von Cramon wrote:
    Regarding my questions about correctly using Force component rematch and Merge components only, if the detected features (points) are stored in a Composition, then these two settings would appear to only control what they say, keeping a Component from splitting into two and a black&white attempt to merge any active Components (their cameras enabled) into a single Component. And if that's correct and nothing more, I'd think these settings do nothing to provide or remove "plasticity" in the points. Yes?

    I'm sorry, but my grasp on those settings is rather weak. I would imagine that Merge Components will keep the individual alignments rigid within themselves, but what happens if several images are in both? I guess that looking closely at some images and comparing their values regarding distortion and position will clear that up. As to Force Component Rematch - no clue whatsoever... 8-)
    Benjamin von Cramon wrote:
    Thanks for your feedback, Götz, and thanks WishGranter, if you can further clarify this topic.
    No problem, and thank you too, because it helps me just as much! And Wishgranter will probably say something entirely different altogether... :lol:

  • Benjamin von Cramon
    My Gosh, I'm even more confused now than I suspected. Firstly, no idea why I kept referring to Compositions. Components, of course. And now I can't even add the embarrassment emoji, where does that live? Too technical to use emojis?!

    I thought I read in the Help that Control Points were technically what you acquired in the field to establish ground control, that these could be imported (or are those Ground Control Points, GCPs?), that Tie Points were what one manually placed on common features (though that doesn't make sense, given you have to enable the Control Points button), and that "detected features" is used to describe what RC generates during Alignment and which shows up as the point cloud, otherwise termed "vertices", in 3D view. No? I need to get the nomenclature straight. This also doesn't make sense, as when I'm in 2D view, Image tab, I then see the Tie Points checkbox, which displays what I just called "detected features". Are then not Tie Points or TPs the features RC generates during Alignment?

    Regarding older/younger Components causing issues, I just reread my post (Exporting Registration -- Operation failed) and see I misremembered what WishGranter advised, not to delete younger Component, but to select all images in older Component, make sure Features source is set the same for all, best with "Use all image features", click Align, wait a few secs while RC makes connection to all imagery, then abort, then export should be possible. Anyway, different issue, but I'm now thinking I may have confused the matter by claiming that younger Components can screw things up for older because of a difference in features or stored camera orientations, might simply be that on the way to a younger Component the Feature source setting for images got changed, for some, but not all images referenced by the older Component. Regardless, I'm left thinking that it's a good practice to delete older Components as real progress is made with younger ones, and if you have to step back, then delete the younger.

    About the Points Lasso tool, totally got that wrong. With my idea to use it to remove bad TPs (auto-detected features), I never even tried to open the Filter tool, just assumed. You're right, doesn't work that way. So how do you use points selected with that lasso to identify relevant source imagery? I lassoed some points, switched to 2D view and 2Ds, scrolled and never saw anything jump out to point at source images. If Tie Points/detected features are only for display, then it would make sense that there's no merit to deleting them, explaining that behavior.

    With Force Component Rematch, RC preserves Components' cams count during Alignment, meaning that a new (young baby) Component version won't contain fewer cams than any of its older parents, e.g. a 20 cam Component and 120 cam Component being aligned may produce a single 140 cam Component OR nothing new, just copies of the older parents. That's using True, with False you might get some fractionally better outcome, like all 120 from parent 1 and only 10 from parent 2, so one new baby gets 130 cams and that other gangly mess of cell tissue only 10. Sorry for the visual.
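
    Purely to pin down that reading of the setting (my interpretation only - nothing official), it amounts to an all-or-nothing rule like this:

        # Toy illustration of my reading of Force component rematch -- an assumption, not documentation.
        def merge_outcome(parent_cam_counts, merged_cam_count, force_rematch):
            """With the setting True I expect all-or-nothing: either the merged component
            covers every parent camera, or nothing new appears and the parents are kept."""
            total = sum(parent_cam_counts)
            if force_rematch and merged_cam_count < total:
                return parent_cam_counts                      # nothing new, just the old parents
            leftover = total - merged_cam_count
            return [merged_cam_count] + ([leftover] if leftover > 0 else [])

        print(merge_outcome([20, 120], 140, force_rematch=True))   # [140]     -> one merged component
        print(merge_outcome([20, 120], 130, force_rematch=True))   # [20, 120] -> nothing new
        print(merge_outcome([20, 120], 130, force_rematch=False))  # [130, 10] -> a lopsided split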

    With Merge components only RC goes for the gold, it's my understanding, either produces just one mega baby Component from all other Components with enabled images or nothing.

    Götz Echtenacher wrote:
    What I did was to raise the weight of several CPs where I suspected an error to 200. After a new alignment, the resulting errors at each image within a CP became quite high, the faulty ones even higher. I corrected those and lowered the weight again.


    That doesn't make sense to me. If you raise the Weight of CPs, implying you're a) really confident of those features appearing the same from different perspectives to use them and b) confident that you executed nicely on placement, then why would the reprojection error climb higher (which I've seen, and which motivated this whole thread)? You'd think the point of that exercise would be to make the camera orientations or poses (or is it the TPs or detected features?) conform to the CPs, they having greater weight. From your earlier post:

    Götz Echtenacher wrote:
    As I understand it, detected features are stored in the cache, not alignment results.


    Okay, so detected features (TPs?, thanks for clarifying, WishGranter) are stored in cache. If those features, as seen in the point cloud/vertices of a Component, don't influence new Alignments and are only for display, you're then saying that influencing Alignment via manual intervention happens with new CPs in combination with the choice to delete or to keep a Component, which you say only stores the camera alignments (six numbers for position and rotation). Yes? If so, this then simplifies the question of what settings and actions are left on the table to influence an Alignment manually and how to control for plasticity or give in a growing model.

    Despite the limited understanding, I'm actually getting closer to correcting the distortions and closing the stepping, and that's good, but only goes so far. Without knowing why something worked, you're just inviting future hit and miss. Real traction comes from greater clarity. Thanks, WishGranter or others for setting the record straight. Always good to chat with you, Götz.

    Benjy

  • Götz Echtenacher
    Benjamin von Cramon wrote:
    My Gosh, I'm even more confused now than I suspected. Firstly, no idea why I kept referring to Compositions. Components, of course. And now I can't even add the embarrassment emoji, where does that live? Too technical to use emojis?!

    Don't be too hard on yourself! Happens to everyone! :D This is such a freakishly complex field, I am not sure there are many people who know everything. And I learn something new almost every day I work with RC. Last night I watched a review of a drone - the guy seemed pretty experienced (with drones) but didn't know what exposure means... :shock:

    Benjamin von Cramon wrote:
    I thought I read in the Help that Control Points were technically what you acquired in the field to establish ground control, that these could be imported (or are those Ground Control Points, GCPs?), that Tie Points were what one manually placed on common features (though that doesn't make sense, given you have to enable the Control Points button), and that "detected features" is used to describe what RC generates during Alignment and which shows up as the point cloud, otherwise termed "vertices", in 3D view. No? I need to get the nomenclature straight. This also doesn't make sense, as when I'm in 2D view, Image tab, I then see the Tie Points checkbox, which displays what I just called "detected features". Are then not Tie Points or TPs the features RC generates during Alignment?

    CPs are, as you say, manual TPs. When you add some coordinates, they turn into GCPs, but they still have the "lower" functionality. If you want to avoid any influence, I guess you need to set the Weight to 0.
    In this case, I am pretty certain that Detected Features are the ones in the 2D imagery, whereas Tie Points are the three-dimensional result. So basically in each alignment DFs will be used to calculate TPs, generating a component. The same goes for CPs, only that they are called the same (which could be changed, I suppose, to make that clearer).

    Benjamin von Cramon wrote:
    Regarding older/younger Components causing issues, I just reread my post (Exporting Registration -- Operation failed) and see I misremembered what WishGranter advised, not to delete younger Component, but to select all images in older Component, make sure Features source is set the same for all, best with "Use all image features", click Align, wait a few secs while RC makes connection to all imagery, then abort, then export should be possible. [...] Regardless, I'm left thinking that it's a good practice to delete older Components as real progress is made with younger ones, and if you have to step back, then delete the younger.

    Thank you for clearing that up. I remember it now; I think it did not work in my case, but I went a different way. And I agree with you about the general hygiene. The problem is, the initial errors sometimes get "baked" into all children (I like this expression), which will not resolve until you start from scratch. Which brings us back to the initial question. So you delete all components with all history of misalignment (wrong TPs) and re-align. But since you still have all the CPs that you have created and carefully improved, you will give RC a huge helping hand with the new alignment, so that these errors should not occur again. It also will not take too long, since the DFs (detected features) on the 2D images still exist in the cache, and detecting them is a big chunk of the processing time. They will now be interpreted in a different way to create more accurate TPs.

    Benjamin von Cramon wrote:
    About the Points Lasso tool, totally got that wrong. With my idea to use it to remove bad TPs (auto-detected features), I never even tried to open the Filter tool, just assumed. You're right, doesn't work that way. So how do you use points selected with that lasso to identify relevant source imagery? I lassoed some points, switched to 2D view and 2Ds, scrolled and never saw anything jump out to point at source images. If Tie Points/detected features are only for display, then it would make sense that there's no merit to deleting them, explaining that behavior.

    Yes, the lasso is there to mark TPs (the 3D ones). You can for example press Find Images in the Alignment tab, and then all the cameras (images) whose DFs contributed to those TPs get selected. A hugely helpful tool for identifying cameras with problems (if e.g. you mark stray TPs or one layer of a doubled surface).
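
    Conceptually (just my own simplification of what Find Images does), it is something like this:

        # Sketch of the idea behind "Find Images", assuming each 3D tie point remembers
        # which cameras (images) observed it -- my simplification, not RC internals.
        def find_images(selected_tie_points):
            """Collect every camera that contributed an observation to the selected tie points."""
            cameras = set()
            for tp in selected_tie_points:
                cameras.update(tp["observed_by"])
            return sorted(cameras)

        stray_points = [{"id": 7, "observed_by": ["IMG_0101", "IMG_0102"]},
                        {"id": 9, "observed_by": ["IMG_0102", "IMG_0117"]}]
        print(find_images(stray_points))  # ['IMG_0101', 'IMG_0102', 'IMG_0117']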

    Benjamin von Cramon wrote:
    With Force Component Rematch, RC preserves Components' cams count during Alignment, meaning that a new (young baby) Component version won't contain fewer cams than any of its older parents, e.g. a 20 cam Component and 120 cam Component being aligned may produce a single 140 cam Component OR nothing new, just copies of the older parents. That's using True, with False you might get some fractionally better outcome, like all 120 from parent 1 and only 10 from parent 2, so one new baby gets 130 cams and that other gangly mess of cell tissue only 10. Sorry for the visual.

    Bingo! I learned something new! Thanks a lot - that made the whole exercise worth it to me!!! :D

    Benjamin von Cramon wrote:
    With Merge components only RC goes for the gold, it's my understanding, either produces just one mega baby Component from all other Components with enabled images or nothing.

    So do you think that they are still flexible a bit, meaning RC will still improve the cameras relative to one another in one parent component?

    Benjamin von Cramon wrote:
    That doesn't make sense to me. If you raise the Weight of CPs, implying you're a) really confident of those features appearing the same from different perspectives to use them and b) confident that you executed nicely on placement, then why would the reprojection error climb higher (which I've seen, and which motivated this whole thread)? You'd think the point of that exercise would be to make the camera orientations or poses (or is it the TPs or detected features?) conform to the CPs, they having greater weight.

    I can only tell you what worked for me. I think by raising the weight ridiculously high, you eliminate the good influence of TPs in the vicinity, thus making it easier for an error in one of your CPs to do its evil work and thereby stick out more clearly...

    Benjamin von Cramon wrote:
    Okay, so detected features (TPs?, thanks for clarifying, WishGranter) are stored in cache.

    DFs: 2D and in the cache; TPs: 3D and in the alignment/component 8-)

    Benjamin von Cramon wrote:
    If those features, as seen in the point cloud/vertices of a Component, don't influence new Alignments and are only for display, you're then saying that influencing Alignment via manual intervention happens with new CPs in combination with the choice to delete or to keep a Component, which you say only stores the camera alignments (six numbers for position and rotation). Yes?

    Ish. The Sparse Point Cloud is the representation of all TPs, which (I am pretty certain) are all stored in a component (created by an alignment) along with the camera positions. And yes, CPs influence the alignment. And yes, by deleting ALL (!) components, the result can be quite different, since RC has nothing (wrong) left to go back to. That's actually something that the guys (mostly wishgranter) suggested several times and I use it A LOT lately...

    Benjamin von Cramon wrote:
    If so, this then simplifies the question of what settings and actions are left on the table to influence an Alignment manually and [...]

    Nothing! :D That's the puzzle we are left with since the beginning. :shock: But I am pretty confident that there will eventually be something equivalent to Gradual Selection in a different software. Let's all hope that it will be sooner than later. And let's express this to the RC team! :twisted:

    Benjamin von Cramon wrote:
    Despite the limited understanding, I'm actually getting closer to correcting the distortions and closing the stepping

    That's good! Of course, as Wishgranter would point out, it is always easier and quicker to take "better" images. But there is the BIG IF: if you can go back...

    Benjamin von Cramon wrote:
    Without knowing why something worked, you're just inviting future hit and miss. Real traction comes from greater clarity.

    I cannot express how much I agree with you! This is something that I find highly frustrating at times. Sorry, this might get a bit of a rant now (not too serious though). I ALWAYS try to weasel useful "real" info out of people here to achieve exactly that. Users need to be able to understand the mechanics in the background to be able to use the software properly on their own. And as ultra-super-helpful as many of the guys from RC are in most cases, it doesn't help in the long term to just say "set variable XYZ to so-and-such". I want to know what it does and why. But I guess that is something we "main posters" have to cover a bit. Because the problem with developing something as crazily complex as RC is always that the more involved you get, the less you can put yourself in the position of somebody who is totally new to it.

    Benjamin, I find the stuff you do very interesting and many of the problems you describe are also relevant to my work, so I am learning from our exchange quite a bit, too. If nothing else, I know that another experienced user like you has nothing more up his sleeve that I don't know about. And if something I said here was wrong and Wishgranter does drop by to say something, I will learn even more!
    So in short: also always good to chat with you, Benjy...

  • Götz Echtenacher
    Just tried something out:
    Two components with overlapping area, but no image in both components.
    Even though I set Merge Components Only to true, the distortion values differ slightly for an image in the source component and the new combined one. So I guess that even with this setting, the cameras still have some 'plasticity' as you called it...

    Also I need to correct something: the coordinates do not seem to be stored in an alignment, at least not with each individual image - or they are not displayed...

    To counterbalance my earlier rant, I now have to praise RC's precision to the skies!
    Look at this example - two storeys in a roof space separated by floorboards and only connected by ONE chain of images.
    Even though this is highly precarious, the floorboards on the open end still have almost the same thickness (about 25 mm) as on the closed side. The distance is about 5 m or so...
    WOW!!!

    BTW: in-camera JPEGs with flash from my LX100 @ 12 MP... ;-)

  • Tom Foster
    This kind of thread, on a forum of this quality, is a huge reason for choosing RC over the alternatives - especially Bentley ContextCapture, whose forum is mega-incomprehensible and sluggish. Management, please take note - the RC forum is one of your biggest assets. To know that almost any question will get a quick, wise response gives infinite confidence.

  • Götz Echtenacher
    Well, that's nice to read... :)

  • Benjamin von Cramon
    Thanks Tom, much appreciated. Pick up a shovel :) Götz, I was beginning to wonder why there was no response to my last post; I spent a long time drafting it and now see it never got posted - it must have timed out. If I can retrace my steps, it's worth working this through, as I'm seeing some dots come together, and others with increased residuals ;)

    Firstly, thanks for clarifying the nomenclature, most useful. Secondly, two (and preferably more) RC users comparing notes on how this powerhouse software thinks isn't unlike photogrammetry itself: two minds are better than one at triangulating shared experience alongside the unique insights we each pick up individually. What's missing here is for WishGranter to grant our wish - if nothing more, to dispel misconceptions about what's happening under the hood, and ideally, to kick in with relevant details we've not even thought to bring up. Thank you, WishGranter, for your time in providing feedback!

    Can we confirm this statement, please?:

    In this case, I am pretty certain that Detected Features are the ones in the 2D imagery, whereas Tie Points are the three-dimensional result. So basically in each alignment DFs will be used to calculate TPs, generating a component.


    DFs: 2D and in the cache; TPs: 3D and in the alignment/component


    I question (only musingly) whether TPs seen in the 3D view aren't the same as the TPs seen per 2D image, putting aside for now where they're stored, just to get the terms and moving parts defined. When I grab TPs in 3D view with the Points Lasso, click Find Images, switch to 2Ds on the left pane and 2D on the right, I then enable Tie Points under Image tab, it would seem possible that the same TPs seen in 3D are simply represented flat per image in 2D. No? It doesn't make so big a difference, but whether they're the same or different data forces a question about where they're stored: one data set stored in cache and/or in the rcproject file within a Component, or two different types of data, one stored in cache and the other in the project file? I not only want to know where it's stored, it's useful to know what function it serves in either place. Specifically, stored in cache to speed recovery, but if cache has no influence on future Alignment, then emptying cache would only benefit managing storage (or is there some additional benefit?). Stored in Component/rcproject file, how exactly does one set of TPs in an older Component influence a younger Component, or do they at all? Does a subsequent Alignment only consider the TPs in the previous generation Component(s) that it drew from?

    Can a younger Component influence (harm) an older Component? When we report our belief in good hygiene, not just to avoid clutter, but to not invite unintended consequences having one set of TPs exerting unwanted influence, is that idea founded?

    Götz Echtenacher wrote:
    So do you think that they are still flexible a bit, meaning RC will still improve the cameras relative to one another in one parent component?


    My understanding is that with Merge components only = True, RC either joins Components or not, doesn't generate many equal or smaller Components. I think there's plasticity in the Components it's attempting to merge, but not because of this setting, which would also apply to Force component rematch. The plasticity is controlled per image, or Input, so that if you select all images...

    [Image: Inputs.JPG]

    ...you may already know about the Feature source settings. "Use all image features" provides the highest plasticity, especially if applied to all images, in that RC considers all DFs (or might these be synonymous with TPs?). "Merge using overlap", being the first in that list, might indicate the lowest level of plasticity but the fastest processing time: it doesn't consider all the TPs in Components, only the TPs and CPs in the images common to at least two Components, that being an overlap. "Use component features", the second choice, would fall in the middle: RC considers all the TPs and CPs in all Components, whether they overlap or not. So, if two Components have overlapping real estate but totally different images, then this route tests for that condition. This raises a question: what happens when some images in Components are set one way and images not belonging to a Component are set another? Not that I want to parse out every permutation and combination, though you'd have to do that to really get it.

    If you're working in chunks, exporting Registration and importing Components, then "Merge using overlap" seems like the way to go if your Components include common images, but then why not combine those to begin with in one Component? Okay, file management might urge you in that direction. If your Components share common real estate, but not imagery, then "Use component features" in combination with CPs should get you there. The moment you introduce new images, e.g. data that perhaps hasn't behaved well within the image sequences comprising Components, then I'd think all photos, not just the new ones but also any belonging to the Components, should be set to "Use all image features". You wouldn't want to lock plasticity between the new images and any of the images belonging to existing Components, since how do you know which ones they overlap with (real estate-wise)? The first two settings applied to images belonging to the Components, considered next to the third setting on the remaining images, would potentially force the new images to conform to the edges of a fixed chunk of world - soft on the inside, too crunchy on the outside.
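
    To make my reading of those three settings concrete (again, my interpretation of the behaviour, not RC's documented internals), the choice amounts to picking which pool of points a new Alignment gets to chew on:

        # Sketch of my reading of the three Feature source settings -- an assumption, not documentation.
        def feature_pool(setting, image_dfs, component_tps, overlap_tps):
            """Pick which points a new alignment works from, per my interpretation above."""
            pools = {
                "Use all image features": image_dfs,      # everything detected in the 2D images: most plasticity
                "Use component features": component_tps,  # only points already locked into components
                "Merge using overlap": overlap_tps,       # only points on images shared by two or more components
            }
            return pools[setting]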

    Götz Echtenacher wrote:
    I think by raising the weight ridiculously high, you eliminate the good influence of TPs in the vicinity, thus making it easier for an error in one of your CPs to do its evil work and thereby stick out more clearly...


    I now believe this statement holds an important clue to the limited plasticity in RC via human intervention, this assuming that all your images were set to "Use all image features". Since I often use that setting, though I still encounter the stepping and issues south of that, e.g. multiple Components, I wonder if increased weight on a CP is considered equally among all TPs relevant to imagery containing a given CP. If each TP hears the same weight, then the TP(s) closest to the CP carry the greatest burden to make the adjustment. Think about ironing a wrinkled garment. If you focus too much on a small area with the iron, you simply iron in the wrinkle or move the fold over a ways, like passing the buck.

    If you encounter a stepped section - wall, floor, or ceiling separates - and place CPs along that area in as many photos as feature it, confident that a) those features are strong candidates, appearing similar from different vantage points, and b) you executed with precision in placing the CPs, then RC gets the memo: "close this stepped section!". "Use all image features" may not go far enough. What if each TP in the vicinity were given a "listening weight", as it were, based on proximity? The nearest TPs respond the most dramatically to close the gap. And so we don't simply displace the load onto neighboring TPs, shifting the step, the listening weight drops off with distance, perhaps controlled by the user. Maybe the issue is localized, maybe the step relates to a huge space where a giant loop has difficulty closing.
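
    Something like this purely hypothetical falloff is what I have in mind (not an existing RC control, just a sketch of the idea):

        import math

        # Hypothetical "listening weight": influence decays smoothly with distance from the CP,
        # so the correction isn't simply shoved onto the next patch of tie points.
        def listening_weight(distance_to_cp, falloff=0.5):
            """Influence in the range (0, 1] that tapers off with distance."""
            return math.exp(-falloff * distance_to_cp)

        for d in (0.0, 1.0, 2.0, 5.0):
            print(f"distance {d}: weight {listening_weight(d):.2f}")  # 1.00, 0.61, 0.37, 0.08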

    This kind of thing must be at work in RC, in any photogrammetry engine, for it to work at all, since the order in which an engine loads images would cause all kinds of "wrinkles" if it didn't adapt by spreading the mathematical load. What we're after here is how to enable the user to push the limits. RC is truly amazing, as you say, but system resources alone limit RC, if not the inherent limitations of the software to support greater control over plasticity during human intervention. To your comment about not always being able to go back, I recently returned from Siberia, 3D mapping in a salt mine 300 meters down. Because I knew we wouldn't be returning, I strongly recommended my client purchase a fast laptop to run each day's data, to protect against data gaps, stepping, and issues with alignment. That worked for two days before the machine wouldn't even allow offloading files without crashing, so I'm flying without a net. I was super conservative and all that data has every camera aligning, but on the last day my client asked if I couldn't loosen up, our last chance to get a long tunnel shot in the bag. I changed my (proven) workflow and switched camera bodies with one of our Russian fixers; her Sony A7SII had a fourth the resolution of my A7RII, but really sweet high ISO, so I was able to shoot from a greater distance, map with a broader paint brush, if you will. Most of this data also worked, but I was right at the edge: gaps, a stepped area, the issues that prompted this thread reared their heads.

    Good news: thanks to you, I've now - Points lasso > Find Images > refined image selection > Find Points - pinpointed the problem children, gotten everything to align without stepping, and can go back to the client with good news. I'm unfamiliar with Gradual Selection, but I do hope that WishGranter humors our lengthy dispatches here and weighs in with key facts about what types of features influence Alignment, straightens out any misconceptions, and hears our plea for possible improvements to optimize plasticity/control. BTW, I'm very curious about your work - truly amazing that you were able to get two highly separated spaces sharing those thin floor boards to properly align, and yes, that also speaks to RC's killer algorithms at work. I'd welcome us connecting in real time over TeamViewer or Google Hangouts (the former is way better), if you're up for that.

    To Tom's point, and thank you again for taking note, communication is a good thing. Many will say, who has time to read so many words, why all this? I don't turn to forums for my social fix. We're working across the world on lonely planets, at least I often feel that way in my sanctum/bubble. You were one of the folks early on, open book, who I greatly valued and value still to tell me things I didn't know, compare notes, bear down on important steps to becoming a power user worthy of the name. Keep it coming, Götz.

    Best,
    Benjy

  • Götz Echtenacher
    Hi Benjy,

    this is a short first answer in the hope that it will get some attention... :D

    I do not think that we will get answers from the RC team to this level of questions!

    Unfortunately, that is my experience in the past - there are very few exceptions.
    Bug reports, yes please --- deeper understanding of the software, not so much... :(

    But I am always happy to be proven wrong! ;)

  • Götz Echtenacher
    Benjamin von Cramon wrote:
    I question (only musingly) whether TPs seen in the 3D view aren't the same as the TPs seen per 2D image, putting aside for now where they're stored, just to get the terms and moving parts defined. When I grab TPs in 3D view with the Points Lasso, click Find Images, switch to 2Ds on the left pane and 2D on the right, I then enable Tie Points under Image tab, it would seem possible that the same TPs seen in 3D are simply represented flat per image in 2D. No? It doesn't make so big a difference, but whether they're the same or different data forces a question about where they're stored: one data set stored in cache and/or in the rcproject file within a Component, or two different types of data, one stored in cache and the other in the project file? I not only want to know where it's stored, it's useful to know what function it serves in either place. Specifically, stored in cache to speed recovery, but if cache has no influence on future Alignment, then emptying cache would only benefit managing storage (or is there some additional benefit?).


    Just try it out. When I delete the cache, the next alignment takes longer due to feature detection, which is shown in the console window. Hence the features belong to the images and are stored in pixel coordinates. I guess they might be called tie points as well - it seems to be standard photogrammetry terminology; at least I got plenty of hits in *search-engine-of-your-choice*. And the same is true for the alignment - delete all components and it will take longer.
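
    The way I picture the two kinds of data (my mental model only, not RC's actual storage format):

        from dataclasses import dataclass, field

        # My mental model of the two kinds of data -- an assumption, not RC's actual file format.
        @dataclass
        class DetectedFeature:        # 2D, per image, pixel coordinates; lives in the cache
            image: str
            x_px: float
            y_px: float

        @dataclass
        class TiePoint:               # 3D, produced by an alignment; lives in the component
            x: float
            y: float
            z: float
            observations: list = field(default_factory=list)  # the DetectedFeatures matched to form it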

    This is what I gathered from all my projects and forum activities (but I've been wrong before). And the help of course, but that only gets you so far.

    I wish there was some option for medium-sized freelancers that included some level of tech support. I also don't understand why such questions are not answered in the forum - by officials, that is. Because even IF I had some tech support option, my first step would be to give the forum a skim - very often much quicker than any support request can ever be. So ultimately the old proverb about teaching a man how to fish is also true here...

    Don't get me wrong, once you get their attention, everybody is very nice and helpful. But there is the big ONCE...

  • Götz Echtenacher
    Benjamin von Cramon wrote:
    Stored in Component/rcproject file, how exactly does one set of TPs in an older Component influence a younger Component, or do they at all? Does a subsequent Alignment only consider the TPs in the previous generation Component(s) that it drew from?


    That's the thing: NO idea. I can only tell you from my experience with the wall. After deleting ALL components, it worked. So it means they do have an influence, just to what extent I am not certain. My guess is that if you hit the Align button, RC first looks to see if there is anything to go on. If yes, it uses that and goes from there. Even if you don't change a thing but hit Align again, the results will be slightly different, maybe a bit more optimized. I am sure one could find some discussion about that here in the forum.
    Whether that is good or bad is in the eye of the beholder. It certainly speeds things up, but it is not ideal in terms of ironing out errors...

    Benjamin von Cramon wrote:
    Can a younger Component influence (harm) an older Component? When we report our belief in good hygiene, not just to avoid clutter, but to not invite unintended consequences having one set of TPs exerting unwanted influence, is that idea founded?


    No, I think old components are "as is". That makes sense if you consider: 2D = cache, 3D = component. Even if you do 100 alignments, the features in the images that are the basis for alignment will remain the same. I think even if you deleted the cache, the algorithms are so solid that they would produce the exact same result if the source image is exactly the same.

    I guess a good example of how it works would be to look at a 3D headset. Left and right are individual 2D images with many features (as in image content). The brain then calculates 3D information out of the differences in the way each feature correlates to the other, which would be our component...
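
    A tiny numeric version of that analogy (an idealised, rectified stereo pair - my simplification of what the matching math does):

        # Two 2D observations of the same feature turned into one depth value --
        # the classic rectified-stereo relation Z = f * B / disparity (a simplification).
        def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
            disparity = x_left_px - x_right_px
            return focal_px * baseline_m / disparity

        # Same feature at x = 1020 px in the left image and x = 1000 px in the right,
        # focal length 1000 px, "cameras" 6.5 cm apart (roughly eye spacing):
        print(depth_from_disparity(1020, 1000, focal_px=1000, baseline_m=0.065))  # 3.25 m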

  • Benjamin von Cramon
    That all makes good sense, do like the analogy linking the TPs in Component/DFs in cache relationship to the 3D mental construct/2D visual info per eye. In a y-up world, the X/Y values are common to both the DFs and the TPs, our brains hallucinate the Z value, photogrammetry's calculations predict similar results.

    I take it you were already familiar with the 3 Feature source settings, am curious if you agree these largely define the extent of user control over plasticity.

    Benjy

  • Vlad Kuzmin
    Jonathan_Tanant wrote:
    We definitely need a tie points filtering feature. Align first with a high max error, then filter out.

    It already exists, right from the start.
    Common workflow:
    Set Max reprojection error to 4 px, for example, align "rough", delete the small components and run Align again.
    Decrease the reprojection error and "refine" the alignment again,
    decrease and refine again...

    My common way is from 2 px down to 0.5 px, with images grouped by EXIF and a final refinement with ungrouped images.
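
    Roughly, as a sketch (run_alignment here just stands in for pressing Align with the given Max reprojection error - it is not an actual RC API):

        from types import SimpleNamespace

        # Sketch of the coarse-to-fine workflow above; `run_alignment` is a stand-in for
        # pressing Align with a given Max reprojection error, not a real RC call.
        def coarse_to_fine(run_alignment, thresholds_px=(4.0, 2.0, 1.0, 0.5), min_cameras=10):
            components = []
            for max_err in thresholds_px:
                components = run_alignment(max_error_px=max_err)
                # delete small stray components before the next, stricter pass
                components = [c for c in components if c.camera_count >= min_cameras]
            return components

        # dummy stand-in so the sketch runs: pretends every pass yields one 150-camera component
        fake_align = lambda max_error_px: [SimpleNamespace(camera_count=150)]
        print(coarse_to_fine(fake_align))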

  • Vlad Kuzmin
    Götz Echtenacher wrote:
    Just tried something out:
    Two components with overlapping area, but no image in both components.
    Even though I set Merge Components Only to true, the distortion values differ slightly for an image in the source component and the new combined one. So I guess that even with this setting, the cameras still have some 'plasticity' as you called it...


    Merge components only will use only the images from components and will not count any other images. But yes, it will adjust the lens settings, because the relative positions can be slightly different.

    If you want a "true" merge only, you need to set the feature source on the images to "Merge using overlaps". It will then use the same images in different components for the alignment. But if the positions in the different components vary, RC will probably still adjust the coordinates and lens settings too.

  • Benjamin von Cramon
    Vladlen,

    This workflow involves little human intervention, which is what's explored here: how to leverage plasticity in RC with settings when doing something manually, i.e. adding CPs. What you describe is largely procedural - nothing against that, and great if it saves time - and perhaps this addresses the question of how Components influence subsequent Alignment: RC learns from itself? By beginning with a higher max reprojection error, RC provides more give in accommodating overly converged photography, yes? But then, by filtering out the TPs with high reprojection error and tightening down on that setting in subsequent Alignments, your experience is that RC will take a given set of images further this way than had you set max reprojection error to the final, lowest setting out of the gate. Yes?

    Beyond helping RC learn from itself this way, when it comes to true manual intervention via setting CPs, say to close a stepped area as we've described in separating walls, ceiling, floors, would you simply plant CPs in the troubled area then apply this sequence of Alignments with graduated max reprojection errors and filtering between each pass?

    Thanks, much valued here.

  • Vlad Kuzmin
    Benjamin,

    Hmm, learning? RC is not "learning", it's pure math.

    Every image has tie points; the math just tries to "connect" all these tie points in 3D space within the desired error. All points with a higher error are dropped.
    A secondary Align (without force rematch) tries to refine this "matrix" and decrease the mean and median error, dropping bad points along the way.
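
    The quantity being thresholded is the reprojection error - the pixel distance between where a 3D point projects into a camera and where the feature was actually detected. A bare-bones sketch (simple pinhole camera with no rotation, my simplification):

        import math

        # Bare-bones reprojection error: project a 3D point through an axis-aligned pinhole
        # camera and measure the pixel distance to the detected feature (a simplification).
        def reprojection_error(point_3d, cam_pos, focal_px, observed_px):
            x, y, z = (p - c for p, c in zip(point_3d, cam_pos))
            projected = (focal_px * x / z, focal_px * y / z)
            return math.dist(projected, observed_px)

        err = reprojection_error(point_3d=(0.2, 0.1, 5.0), cam_pos=(0.0, 0.0, 0.0),
                                 focal_px=1000.0, observed_px=(41.5, 20.0))
        print(f"{err:.2f} px")  # 1.50 px -> dropped with a 1 px limit, kept with a 2 px limit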

    If you set a manual CP, it acts like a "guide": RC tries to solve the math problem under the constraint that this CP must exist, and drops all other points that end up with a higher error. That's why sometimes, after adding a CP, you can lose some images.

    RC has different settings for realigning. The default one uses component points, meaning RC uses only the tie points that already exist in the component. "Use all image features" will use ALL detected features in the images in the refinement step, etc.

    If you have a bad alignment, it means RC matched the wrong tie points. And if you set a CP it will recalculate the existing tie points, or the full set of detected features, to match your CP.

    Setting a CP to a high value like 100-500 will be roughly similar to a CP in Photoscan or to aligning meshes/clouds in CloudCompare/MeshLab.

  • Benjamin von Cramon
    Yes, RC is pure math; learning = AI, still in the future, but aspects of our own learning are also pure math, so it's a baby step.

    Vladlen wrote:
    If you have a bad alignment, it means RC matched the wrong tie points. And if you set a CP it will recalculate the existing tie points, or the full set of detected features, to match your CP.


    So, if you encounter a bad section in the model, like this stepped wall we describe, might one set the images containing those bad tie points to "Use all image features", pointing RC to the detected features (DFs), such that with new CPs and higher weight, RC considers the DFs when selecting new candidates for TPs, but then allow the imagery not containing the problematic TPs to run faster with "Use component features"?

    With the default weight of a CP at 10, do you in practice crank the weight to 100-500 when you're confident of your feature selection (it looks similar from different cameras) and confident of careful placement? Without force rematch, some images might be lost to smaller components or such, but you don't see what Götz reports, super high reprojection error on those heavyweight CPs? If so, maybe if Götz had changed those problem images to "Use all image features", he wouldn't see that problem - new TPs relieve the issue. Is this the logic?

  • Vlad Kuzmin
    To estimate the problem:
    Enable the Inspector tool, select the point lasso tool and select the part of the sparse cloud where such a step exists.
    Usually you will see two or more separate islands of connected cameras. And this means there are no tracks, or only weak tracks, between the cameras in these islands - and as a result an alignment error and an error in the mesh.

    For such shifted surfaces CPs are mostly useless; it is much better to add more images in between these camera islands.

    If there is no chance of that: using the select points tool, select the part with the error, hit Find Cameras, and set CPs on these cameras if there is good overlap - this will fix the error. But it requires too much time for manual work.
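
    The "islands" can be thought of as connected groups in a camera graph where shared tie points are the edges - a toy sketch (my illustration, not RC's Inspector output):

        from collections import defaultdict

        # Toy version of the "islands" idea: cameras are nodes, shared tie points are edges;
        # separate connected groups mean no (or only weak) tracks join them.
        def camera_islands(matches):
            graph = defaultdict(set)
            for a, b in matches:
                graph[a].add(b)
                graph[b].add(a)
            seen, islands = set(), []
            for cam in graph:
                if cam in seen:
                    continue
                stack, island = [cam], set()
                while stack:
                    c = stack.pop()
                    if c in island:
                        continue
                    island.add(c)
                    stack.extend(graph[c] - island)
                seen |= island
                islands.append(sorted(island))
            return islands

        # cameras 1-3 and 4-6 only ever match among themselves -> two islands, i.e. a likely step
        print(camera_islands([(1, 2), (2, 3), (4, 5), (5, 6)]))  # [[1, 2, 3], [4, 5, 6]]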

  • Benjamin von Cramon
    About time I learned about the Inspector tool; I'll try this to see what's meant by "two islands", but I get the idea. I've used that second workflow you described and found it useful for identifying the problem-children images: plenty of overlap with other images, but overly converged and looking into the distance, which I know invites problems. I inspected these photos, saw they didn't contribute much, so I disabled them, and a new alignment resolved the issue.

    I'm now clear on detected features stored in the cache per 2D image versus tie points stored in components and seen in 3D, and on how to then use Merge using overlap, Use component features, and Use all image features. With these diagnostics on top, it's making better sense. Many thanks, Vladlen.

  • Götz Echtenacher
    Benjamin von Cramon wrote:
    That all makes good sense, do like the analogy linking the TPs in Component/DFs in cache relationship to the 3D mental construct/2D visual info per eye. In a y-up world, the X/Y values are common to both the DFs and the TPs, our brains hallucinate the Z value, photogrammetry's calculations predict similar results.

    I take it you were already familiar with the 3 Feature source settings, am curious if you agree these largely define the extent of user control over plasticity.

    Benjy


    Hi Benjy,
    sorry, didn't get around to all of your post yesterday... :-)
    To be honest, I haven't played around with that at all and Use All Image Features is the preset...
    Is there an option for a whole component? Because I did not find that...

  • Götz Echtenacher
    And just to summarize your long post, Benjy:

    I think you lost me there in the ether of the intricate details of how photogrammetry works - the iron analogy. :?
    We need to chat about that as you suggested!
    I suspect that all those intricacies are probably already manageable, if one understands photogrammetry from within, as a mathematician or software engineer.
    But here is the line for me - I am a practical user and in the end just muddling through... :D

    I am glad it worked out for you!
    Your stuff sounds really amazing.
    And it also looks amazing on your website - the brick tunnel with rails, perfect right back to the last obscure corner!
    How many images did that take???

  • Götz Echtenacher
    Thanks to both of you, Benjy and Vladlen, for explaining the component merging again.
    I remember now that I did come across that in the past.
    Really need to get this into my head, but then there is so much!
    And after all, I also need to do my "real" work still, which is buildings archaeology... ;)

  • Götz Echtenacher
    Vladlen wrote:
    My common way is from 2 px down to 0.5 px, with images grouped by EXIF and a final refinement with ungrouped images.


    Sounds exactly like I imagined it should be possible.
    I tried it once but it only broke up into different components...
    What do you mean by refinement with ungrouped images?

  • Vlad Kuzmin
    By default, grouping lenses by EXIF on import is disabled in the global settings. But it should always be On.

    RC will treat all images with identical EXIF settings as one lens, and the features from all grouped images will be used to estimate that lens's parameters.
    Usually I have about 5-6 groups, because I use zoom lenses and often switch from horizontal to vertical.

    So let's imagine we refine the alignment down to 0.5 px and have 99% of the images aligned (if we didn't make any mistakes; if we did, we used CPs). So we have 1-6 groups with 1-5 lens settings. But zooming also changes the lens and, as a result, the undistortion. So before the final refinement, we select the "Images" item in the 1D view and choose Ungroup.
    And run Align again. Now all cameras are treated as independent lenses and RC can finally refine the small deviations. As a result (not always, in rare cases it can't), the final alignment is more precise.
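
    As a sketch of the grouping idea (the EXIF field names here are just an example, not RC's actual grouping key):

        from collections import defaultdict

        # Sketch: images whose EXIF says "same body, same focal length, same orientation"
        # get lumped into one lens group; "ungrouping" = one lens per image for the final pass.
        def group_by_exif(images):
            groups = defaultdict(list)
            for img in images:
                key = (img["camera_model"], img["focal_length_mm"], img["orientation"])
                groups[key].append(img["file"])
            return dict(groups)

        shots = [{"file": "a.jpg", "camera_model": "LX100", "focal_length_mm": 24, "orientation": "landscape"},
                 {"file": "b.jpg", "camera_model": "LX100", "focal_length_mm": 24, "orientation": "landscape"},
                 {"file": "c.jpg", "camera_model": "LX100", "focal_length_mm": 35, "orientation": "portrait"}]
        for key, files in group_by_exif(shots).items():
            print(key, files)  # two groups here; the final refinement would treat all three separately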

  • Götz Echtenacher
    Could somebody give me a heads up about Force Rematch, what exactly does it do?

    And where do I set Use All Image Features or Use Component Features?
    I can only find it in the images - is that the same as for the alignment?
    But why would I set this in the Images Menu and not the Alignment Settings?
    And the standard is All Features anyway, so I don't see where I might have gone wrong.
    Confused...

    About my heavyweight CPs:
    I just meant that it can be used to identify CPs on individual images that have been misplaced.
    There have been cases where I got a little speck on a stone wall entirely wrong, and placing them EXACTLY right is sometimes trickier than it looks at first glance. Especially if there are dozens of CPs with many images per CP.

    I am starting to suspect that our problems, Benjy, might also be down to unusual object characteristics.
    The ideal object is a smooth granite rock that you can put on a white turntable and shoot away at in perfect conditions.
    Our objects differ greatly. For one thing, they are concave rather than convex (when I am indoors), and for another, they are usually dark (in your case absolutely). Lighting them absolutely evenly is impossible, even with almost unlimited resources. Also, we have wildly protruding parts of the objects, which come very close to the camera - bad for light and bad for focus. I am aware that all this can probably be resolved in some way, but it makes it much harder than with other objects (the ones in the ads). :D

    Anyway, great discussion, I've learned a lot!
    Thanks Benjy for initiating it and Vladlen for providing the "professional" background.
    Keep it going!

  • Vlad Kuzmin
    Force rematch - it kicks out all the calculated settings and runs from scratch.

  • Götz Echtenacher
    Vladlen wrote:
    By default, grouping lenses by EXIF on import is disabled in the global settings. But it should always be On.

    RC will treat all images with identical EXIF settings as one lens, and the features from all grouped images will be used to estimate that lens's parameters.
    Usually I have about 5-6 groups, because I use zoom lenses and often switch from horizontal to vertical.

    So let's imagine we refine the alignment down to 0.5 px and have 99% of the images aligned (if we didn't make any mistakes; if we did, we used CPs). So we have 1-6 groups with 1-5 lens settings. But zooming also changes the lens and, as a result, the undistortion. So before the final refinement, we select the "Images" item in the 1D view and choose Ungroup.
    And run Align again. Now all cameras are treated as independent lenses and RC can finally refine the small deviations. As a result (not always, in rare cases it can't), the final alignment is more precise.


    Thank you so much for this one!
    I tried grouping a couple of times but was never satisfied - I also use zoom lenses.
    With your method it finally makes sense!
    You really opened my eyes about the gradual process.
    I guess Benjy will agree once he's up... :-)

  • Götz Echtenacher
    Vladlen wrote:
    Force rematch - it kicks out all the calculated settings and runs from scratch.


    So the same effect as deleting all components?