Faster reconstruction time

Answered

Comments

16 comments

  • Götz Echtenacher

    16 GB is definitely not the problem. I manage much more with my puny setup (see profile).

    Do you only encounter it with this one project?

    If yes, I would try starting from scratch, if it's not too much work. That works surprisingly often. In a pinch, you can export the registration and import it as a component...

  • Lucia CR

    Closing other programs/applications while processing in RealityCapture can help speed up the process.

  • lukasas74

    Haha Götz, I see your build, but you still have more RAM. I'll try overclocking my RAM and CPU. I noticed one thing: I have two different systems and licenses. One RealityCapture version is from Steam and the other is retail (from the website). I tested one scene with the same photos and the same settings. My laptop is a Lenovo Y700: i7-6700HQ, 16 GB RAM, some kind of SSD, and a 960M with 4 GB of video RAM. The laptop did everything faster, which gave me a derp moment. Maybe my workstation is somehow off; I'll try adjusting it in the BIOS. Forgot to mention that the Steam version is on the laptop.

  • Götz Echtenacher

    Puh!    ;-)

    Full or overflowing RAM is a sign of a bug - usually RC should only use physical memory. At least that is what the devs said until recently.

    It would be really interesting to see what is going on there. Is it possible something else is using up a lot of RAM on your workstation? Let us know what you find!

  • Mike Simone

    700 photos is not enough to cause RAM issues unless there is a bug like Götz mentioned.

    How many max features per image do you have set? The higher you set that, the more RAM you are going to need (especially if you are going north of 100k on 16 GB).

    Also remember the GPU is only used for 30-50% of the reconstruction time. Your bottleneck is normally the CPU, but you really won't save much time there IMO.

  • Jonathan_Tanant

    Mmmm, 16 GB is not that big in my opinion - I have 64 GB of RAM on my laptop and 128 GB on my workstation. I still get out-of-memory errors on the laptop sometimes. I have never had one on the workstation. :-)

    I think that a fast and big SSD is really important, for the project AND the cache. Fast for performance, and big so you don't have to clear it every two days if you are doing heavy meshing (depth maps take a lot of space in the cache). I have a 2 TB SSD for that.

    Of course, a proper workflow helps a lot: do not align more than 2,000 pictures at a time. Work in small clusters, then align the clusters together. This makes better use of resources.

    With 700 pictures you should be good to go. As said before, if you run out of memory, lower your sensitivity and/or raise your overlap; that should help. The preselector count should have an effect too. I'm not totally sure about this, but I think RAM usage (as well as CPU load during alignment) should be roughly proportional to the square of the preselected feature count, which makes sense: you basically have to compare every feature against all the others to find matches.
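    Jonathan's scaling intuition can be sketched as a back-of-envelope calculation. The baseline constant below is invented purely for illustration (it is not a measured RealityCapture figure); only the quadratic shape reflects his claim:

    ```python
    # Sketch of the claim that alignment RAM grows roughly with the
    # SQUARE of the preselected feature count. The 4 GB @ 40k baseline
    # is a made-up calibration point, NOT an official or measured value.

    def est_alignment_ram_gb(feature_count, baseline_gb=4.0, baseline_features=40_000):
        """Scale a baseline quadratically with the preselector feature count."""
        return baseline_gb * (feature_count / baseline_features) ** 2

    for features in (10_000, 20_000, 40_000, 80_000):
        print(f"{features:>6} features -> ~{est_alignment_ram_gb(features):.2f} GB")
    ```

    If the quadratic model holds, doubling the preselector count quadruples alignment RAM - which would explain why lowering it is the first knob to turn on a 16 GB machine.
    
    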
  • Götz Echtenacher

    I think there are two issues here. There is a memory limit for extreme numbers of images, which also depends on, as Jonathan says, the number of tie points and maybe also the preselector. I think there is a post somewhere here that is a bit more specific about where those limits are. Completely independent from that (or probably mostly) is the issue where RAM usage starts to inflate and "bleeds" into virtual memory. This seems to be a bug, but I am not sure if its causes are all known yet.

  • Mike Simone

    This is directly from their HW Requirements:

    Memory consumption:

    The RealityCapture application uses state-of-the-art out-of-core algorithms for almost every task. Therefore, you do not need to worry about computer memory. 

    You can easily register unlimited number of images/laser-scans on a single machine. However, it requires a specific workflow using components. In such case 16GB should be enough for thousands high (36-80 MPX) resolution images. You can find more information about components workflow in the application Help, section Using components.

    To align unordered set of images into one component you will need a certain amount of RAM. The consumption depends on a count of images (irrespective of their size) and a count of features per image. If you restrict the count of features per image from default 40K to 20K, you will double the count of images which can use the same amount of RAM.

    Meshing, coloring and texturing are completely out-of-core. Hence you do not need to worry about the computer RAM at all. This means that once you have even one million images or scans registered together, you can create a mesh and a texture on a machine with e.g. 16GB of RAM without any performance loss.


    It's not about the preselector settings; it's all about how many images and how many features per image.

    If you have 16 GB of RAM and 1,000 images, don't go over 80k features per image. With 32 GB of RAM I run 80k features per image on 2,000 images.
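    The quoted requirements say halving features per image doubles the image count for the same RAM, i.e. a roughly linear images-times-features budget. Using Mike's anchor point (1,000 images at 80k features on 16 GB) as the calibration, a hypothetical rule-of-thumb calculator might look like this - it is not an official formula:

    ```python
    # Rough alignment-RAM budget: images x features_per_image ~ constant.
    # Calibrated to Mike's anchor (16 GB ~ 1,000 images at 80k features);
    # this anchor is a forum rule of thumb, not vendor guidance.

    ANCHOR_GB, ANCHOR_IMAGES, ANCHOR_FEATURES = 16, 1_000, 80_000

    def max_images(ram_gb, features_per_image):
        """Approximate image count that fits in ram_gb at a given feature count."""
        budget = ram_gb / ANCHOR_GB                        # relative RAM available
        per_image = features_per_image / ANCHOR_FEATURES   # relative cost per image
        return int(ANCHOR_IMAGES * budget / per_image)

    print(max_images(32, 80_000))  # Mike's 32 GB case -> 2000
    print(max_images(16, 40_000))  # halving features doubles images -> 2000
    ```

    Note this linear model and Jonathan's quadratic intuition above disagree - which is exactly the ambiguity the rest of the thread tries to pin down.
    
    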

  • Götz Echtenacher

    Hey Mike,

    thanks for that!

    I guess they added that at some point in the last few years...   :-)

    Anyway, in the help it says something else about the preselector:

    RealityCapture Help

    "This is the number of features that will be used in alignment from the detected ones. Optimally, set it to 1/4-1/2 of the detected features."

    The problem with this statement is that I regularly get images with more tie points (not features!) than the preselector value. That should not be possible according to the help...

  • Lucia CR

    Tie points and features are not the same, that is right.

    However, we do not see any contradiction in what is stated in the Help here; using the preselector does not mean that there will be fewer tie points.

  • Götz Echtenacher

    Dear Lucia,

    thanks for replying!

    Nobody was trying to criticize anything, just trying to understand. So it would be nice if, instead of being defensive, you would clarify a bit more.

    My main point was that Mike stated the preselector has no effect (at least that's how I read it), but the help says the preselector is the number of features that will be used for alignment. I assume that is on a per-image basis. From that, a new question arose in my mind: if there are only, say, 10k features at RC's disposal for creating tie points (as is the value set for the preselector), how can there be more tie points than that? That was the reason why I suspected a contradiction. Are tie points created independently from features?

  • Lucia CR

    Hello again,

    This was just a bare statement of what I was eligible to say at the time, as usual. I am afraid you will need to come to terms with not getting such in-depth responses as you press for when it comes to RealityCapture "insides". I believe you can understand that.

    Tie points are basically a subset of features. The number of tie points per image should always be lower than the number of features; if it is not, that can indicate an error. The preselector is used mainly for finding images that see the same scene.

  • Götz Echtenacher

    Of course I understand that you can't reveal any secrets about the "insides"! I neither want that, nor did I get the impression that I was pressing for exactly that. On the contrary, I keep saying (in other threads) that we can't expect those insider secrets, nor do they interest me.

    With one exception, and that is the way settings influence the result, and by extension performance (relevant here in this thread). I do not think that is asking too much.

    The whole debate only started because there were some unclear phrasings (to me, anyway) that I wanted to clear up (or try to).

    Mike said the preselector number doesn't matter for performance (in this case, RAM usage), which sounds logical because "pre" means, in my understanding, something like temporary or preliminary. That would imply the full number of features is used after the initial number defined in the preselector. The quote I found in the help gives a different impression, in that it sounds as if only the number defined in the preselector is used. That again led to the question of why some images have more tie points than the preselector.

    So to boil it down, I think the question remains: will all features defined by Max features per mpx/image be used, or only the number defined by Preselector features? I really can't see why this would be considered essential "insider information". If it really is, then please say so and I shall leave you in peace...   :-)

  • Lucia CR

    I am sorry, but I am not allowed to say more on this matter.

  • Jennifer Cross

    Memory will matter for performance... even with small projects I see RC use 16-20 GB of RAM, and some of my larger models have used 70 GB or so. Not having the RAM available means RC either has to use fewer features or spend more time paging data to the cache. An NVMe flash disk will reduce the effects of paging since it is so quick, but if you are paging to a mechanical drive, the penalty will be significant.

    I just wish RC would move more of the processing back onto the GPUs. I seem to see less and less on the GPU and more time CPU-bound these days. (But then I have two GTX 1080s, so maybe the GPU work just gets done more quickly now.)

  • Ben Brown

    "Meshing, coloring and texturing are completely out-of-core. Hence you do not need to worry about the computer RAM at all. This means that once you have even one million images or scans registered together, you can create a mesh and a texture on a machine with e.g. 16GB of RAM without any performance loss."

    We regularly see our memory maxing out (128 GB of RAM) when reconstructing large projects (5K+ images). Is this statement still valid?

