Downressing inputs FTW?
I've been working on capturing some complex twisted trees and struggling with my current machine specs (32 GB RAM, RTX 2060). So as a test I've started downressing my full-frame images from 24 MP (6000x4000) down to 6 MP (3000x2000), and it seems to be doing just as good of a job without hitting memory limits. I'm not sure if maybe my computer just wasn't displaying the larger solves correctly, but I feel like there is something fundamental I'm missing in understanding this. It may have to do with the fact that I took a lot of the photos at the closest possible proximity and in portrait orientation, so they captured more resolution than necessary to describe the small structural detail on the tree.
Is downressing the inputs a common practice? Would this just be a by-eye adjustment to see how low-res one can go before losing the detail I need?
After these tests I'm starting to wonder if I should just set my a7ii camera to record 6 MP images instead of 24 MP, if I'll be downressing them regularly anyway.
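For reference, here is a minimal sketch of the kind of batch downres I've been testing, assuming Python with the Pillow library (Image.Resampling needs Pillow 9.1 or newer). The folder names and the half-scale factor are my own illustrative choices, not anything prescribed by the software:

from pathlib import Path
from PIL import Image

SRC = Path("full_res")   # hypothetical folder of 6000x4000 originals
DST = Path("half_res")   # output folder for the 3000x2000 copies
DST.mkdir(exist_ok=True)

for src in SRC.glob("*.jpg"):
    with Image.open(src) as img:
        # Halving each dimension quarters the pixel count: 24 MP -> 6 MP,
        # the same reduction described above.
        small = img.resize((img.width // 2, img.height // 2),
                           Image.Resampling.LANCZOS)
        # Carry the EXIF block over so alignment still sees the original
        # camera metadata (focal length etc.); it may be absent on some files.
        exif = img.info.get("exif")
        if exif:
            small.save(DST / src.name, quality=95, exif=exif)
        else:
            small.save(DST / src.name, quality=95)

Keeping the output filenames identical to the originals also leaves the door open to swapping the full-res set back in later.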
-
In this case of downressing you can use the smaller images for alignment and the bigger images as a texture layer. You can find more about this in the application Help under Image layers.
So yes, it is a common workflow; it depends on what you need as a result.
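If you try that two-layer route, here is a quick sanity-check sketch in Python, assuming the alignment set and the texture set are paired by identical filenames in separate folders (the folder names carry over from the earlier sketch, and the pairing assumption is worth confirming against the Image layers page in the Help):

from pathlib import Path

full = {p.name for p in Path("full_res").glob("*.jpg")}
half = {p.name for p in Path("half_res").glob("*.jpg")}

# Report any image present in one set but missing from the other,
# so no photo silently drops out of either layer.
for name in sorted(full ^ half):
    print("unpaired:", name)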
-
Aha, thanks for the advice!
I'll be looking into this when my new system arrives early next week.