Features <Different values> locked out

Answered

Comments

2 comments

  • Kruzma
    Hello Benjamin,
    The Features field is read-only - you can't change it by hand, because features are detected automatically.
    You probably meant 'Feature source'. Am I right?
    That field should be available - I have tried it many times and it worked fine.

    If you change the Feature source to something other than 'Use all images features', there should be an icon next to your image in the 1Ds view indicating this state. Can you confirm that you don't have any of these icons next to your selected images?
    [Attachment: icons.PNG]
  • BenjvC
    Big thanks, Wishgranter. I must have set the Feature source at the Images level by mistake, locking me out of changing it at the Component level - it's now clear that the former acts globally and the latter locally. The components are now merged and looking awesome.

    A question about the Feature source setting: do its three options only affect how deeply RC works on features, and are therefore simply about controlling processing time, or do they also determine whether a component's point cloud remains flexible?

    My understanding of RC is evolving alongside my understanding of SfM. I have read with interest the posts explaining how the quality of the Ray of Sight lines reflects accuracy or problems in triangulation, which also explains why residuals are sometimes so high even when I'm certain I've placed my tie points accurately. You previously pointed me to setting Force component rematch to True to overcome the automatically generated features, which accounts for my progress, but I'm still unclear how best to approach front-end workflow steps such as whether or not to enable Group calibration by exif (see the conceptual sketch at the end of this comment). Since I use the same body and prime lens for all imagery, I'm tempted to enable it, but is this only about speeding up processing, or will the solution show less distortion if RC views each image without that one-size-fits-all setting? Did I read somewhere that it can be set to False so RC learns what is happening in a small handful of images, then switched back to True to render smoother surfaces? Does it matter what kinds of images are used to provide this calibration?

    Are there any other best practices for avoiding making RC's work more difficult? For example, what extent of overlap between two components - in terms of shared images and/or shared content - helps RC, versus creating extra work to resolve differences in distortion? Sorry if these questions lack clarity; I admittedly struggle to grasp the concepts, much less convey them properly. Thanks as always for patiently responding.

    Benjy
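
A conceptual aside on 'Group calibration by exif', referenced above: in general SfM terms (this is a sketch of the idea, not RC's actual implementation), grouping by EXIF means bucketing images by the metadata that identifies a body/lens combination, so that each bucket can share a single set of intrinsics during alignment. The snippet below, which assumes Pillow for EXIF reading and a hypothetical 'photos' folder, shows what such bucketing looks like.

# Conceptual sketch (not RC's implementation): bucket images by the EXIF fields
# that identify a body/lens combination, so each bucket could share a single
# set of intrinsics (focal length, principal point, distortion) when aligning.
from collections import defaultdict
from pathlib import Path

from PIL import Image
from PIL.ExifTags import TAGS

EXIF_SUB_IFD = 0x8769  # sub-IFD that holds FocalLength and LensModel


def exif_tags(path):
    """Return EXIF data as a {tag_name: value} dict, merging the Exif sub-IFD."""
    with Image.open(path) as img:
        exif = img.getexif()
        merged = dict(exif)
        merged.update(exif.get_ifd(EXIF_SUB_IFD))
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in merged.items()}


def calibration_groups(image_paths):
    """Group images by (Make, Model, LensModel, FocalLength).

    Images with identical keys would share one intrinsics estimate; images
    with missing EXIF end up in their own bucket and would need per-image
    calibration instead.
    """
    groups = defaultdict(list)
    for path in image_paths:
        tags = exif_tags(path)
        key = (
            tags.get("Make", "unknown"),
            tags.get("Model", "unknown"),
            tags.get("LensModel", "unknown"),
            str(tags.get("FocalLength", "unknown")),
        )
        groups[key].append(path)
    return groups


if __name__ == "__main__":
    images = sorted(Path("photos").glob("*.jpg"))  # hypothetical folder
    for key, members in calibration_groups(images).items():
        print(key, "->", len(members), "images sharing one calibration")

The usual trade-off, independent of any particular tool: one shared calibration per fixed body/lens tends to stabilise the distortion estimate, while per-image calibration gives the solver more freedom at the cost of more parameters to fit.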
