
Hardware Optimisation & Benchmarking Shenanigans




  • Avatar
    Götz Echtenacher

    I agree with ivan's second to last sentence!  :D

    For a first try, I wouldn't overcomplicate it and take something that is already there.

    Like ShadowTail suggested at the beginning with 36 images.

    It's about working out the systematic approach.

    Wow can come later.

    ShadowTail, I like the object, but what about the copyright?

    Also, wouldn't a round object (as in shot in closed circles) be better for the beginning, since there are fewer problems at the (non-existing) borders?

  • Avatar

    I'm late to this thread, what an awesome development, count me in. I'd have to check on rights, but recently captured in a Siberian salt mine, highly occluded environment, not so much the mine walls which are smooth, but sections with human stuff, a burly mining machine, an area where miners eat lunch, lots of texture-rich tools lying about, old dial telephone on the (psychedelic) wall. For Wow factor, this place hits you in unexpected ways, surprise, complexity, natural beauty, human authenticity, while also presenting the right challenges, things, interior space, occlusions, etc.

    This image comes off the internet, gives you an idea.

    The source data comes from 42 MP Sony A7Rii w/ 21mm Zeiss Distagon prime, we'd simply downscale to 10 MP. I can share an animated clip out of UE4 offline (password protected) to show how some of these scenes from RC appear in deliverables.

    In any event, count me in to participate with benchmarking. I've been planning an upgrade, am aware how key benchmarking is, especially when tied to specific apps and separate functions within.


  • Avatar

    You are welcome.  
    Don't be fooled into thinking we are experts or have the remotest clue what we are talking about or doing.
    You are more than welcome to add your 2cents.

    Fear not, the intention of the benchmark is as you hope for.

    There has been a lot of talk from me, and not much evidence of my web-based results page.  I have struggled immensely with that part.  So many solutions that claimed to offer uploading and displaying data turned out to be failures and did not deliver.   I think I have it cracked... mostly.

    This is as good a time as any to share where I am.  There are currently 2 parts:
     1) the upload, and 2) the public results.

    Getting the publicly viewable results shown in a clear and presentable manner that can be analysed and interrogated was an important part.
    Here is where I am with that.  The data is drawn live from the Google spreadsheet to which results are uploaded,
    and updates accordingly.   The pie charts etc. are not final and I will change the metrics displayed/used; it's just a test to get it working, and there will be more useful data shown for your viewing pleasure.

    Note: the contents are fabricated (I changed the uploaded rawresults.txt files each time) and don't represent real results yet.

    The uploading part is currently not as pretty (which will change) and is here.

    I'd very much appreciate anyone trying to upload some data, using the rawresults.txt generated by the benchmark.   Please use that file rather than results.txt, as the latter will add garbage to the spreadsheet; I have not yet added code to reject the incorrect file.   Yes, the results will be kind of useless since we are all using different datasets for now, but at the moment I need help checking that the upload process works correctly and that the results are displayed properly.
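    Until that rejection code exists, a minimal sketch of what it could look like, assuming the uploaded rawresults.txt contains the timing fields shown later in this thread (the exact file format is an assumption):

```python
# Accept an upload only if it looks like the benchmark's rawresults.txt.
# The field names below mirror the sample benchmark output posted in this
# thread; the real rawresults.txt layout may differ.
REQUIRED_FIELDS = ("Alignment Time", "Reconstruction Time", "Texturing Time")

def looks_like_rawresults(text: str) -> bool:
    """Reject results.txt and other garbage that lacks the timing fields."""
    return all(field in text for field in REQUIRED_FIELDS)
```

    A check like this, run before the row is appended, would keep the wrong file from making a mess of the spreadsheet.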

    Known issues:

    1) Results are shown instantly on the upload page, but can take a minute or more to appear on the pretty public results, and you will need to manually refresh the page for your uploaded data to appear.   This is a limitation of the platform: it caches data on the server to save on resources.  Poor Google and their lack of resources...

    2) Works on Chrome; I do not know about other browsers.

    3) The rawresults.txt must be selected for the upload or terrible things may happen (not that terrible, but it will make a mess of the spreadsheet with garbage data).

    4) The chart is full of made-up data.   For now, the fact that results can be generated, uploaded, displayed and analysed is the important part.

    5) You can download the results for your own analytical pleasures; there is a hidden button next to the word "Total".


    As always, feedback is really appreciated.



  • Avatar
    Götz Echtenacher

    Hey ivan,

    awesome post!

    I say YEA      :D

    There have been several attempts like this, but it really needs somebody to gather all the threads.

    By looking at each stage separately, it should also be possible to figure out which HW component can improve which step of the process.

    Do you know of any tool that can monitor HW usage during processing?

  • Avatar

    Hi Gotz, the hwinfo tool mentioned above can also generate log files of various system resource usages over time, which can then be plotted as graphs. Going deeper than that there are profiling tools, but this starts to get too complex and is really of use to the coders of the software.

    The new Windows Fall Creators Update released today now also finally monitors GPU usage in Task Manager alongside RAM and CPU.

    I am yet to test.

  • Avatar
    Götz Echtenacher

    oh, that is nice - do you know by chance if that's also true for dinosaurs with win7?

    I have hwinfo installed and use it frequently, so that should not be a problem - I must have missed it in the depths of your über-post!  :-)

    Awaiting your image set! Maybe we could start with a small-ish one to iron out the kinks?

  • Avatar
    Götz Echtenacher

    Something else to consider:

    RC seems to have some randomness in the alignment process, which means the results can vary, the amount depending on the image set. So I guess it would make sense to run the alignment more than once after deleting the older component...

  • Avatar

    I can possibly provide a very small dataset of only 36 pictures for an object that should all align into a single component using the default settings for RC.

  • Avatar
    Götz Echtenacher

    Sounds like a good start!

    I guess it would make sense to ask RC if they can host those images so that people can download them any time if they want to.

  • Avatar

    Thanks Shadow & Gotz

    - RC has some demo data already, which is accessible via the 'help' files.  It contains both laser and regular camera data. 

    I wonder if this would be suitable.  I also wonder if the CLI dataset they have available may be more suitable, and possibly even accessible to us.

  • Avatar
    Götz Echtenacher

    Hmm, has anyone looked at those yet? Are they practical?

  • Avatar

    Sounds interesting.

    I think it's important to benchmark each stage: alignment, reconstruction part 1, i.e. depth maps (GPU), part 2 creating the model (CPU), then texturing, and maybe even simplify etc...

    I'm mostly interested in seeing part 2 of reconstruction, since it's by far the longest part for my scenes.

    But I'm not sure how useful a few hundred photos will be. You won't see how RAM or SSDs really affect speeds until you get into 2500 - 5000 photos. That will take a long time to benchmark, though it would be more interesting.

    This sounds like it could all be done with CLI scripts and the demo version.


  • Avatar

    You are correct Chris - gaining data from each stage is important.  

    Do you think CLI scripts can work with the demo ?  If so that would definitely be the way forward.

  • Avatar

    I'm pretty sure all the CLI commands except export work on the demo version.

    I have no idea if all the right logs can be saved; I would assume so.

    It would all have to be done in one go though, as you can't save any of the parts.

    Maybe it's worth seeing if all the right information is saved in the logs by just pressing the start button.

  • Avatar
    Götz Echtenacher

    Hey guys,

    why would it be necessary to use CLI? I think we are all motivated enough to search out all the log files and copy-paste the info into a table, right? And I'm not sure what would happen if we install the demo in parallel with the normal versions. I don't think I'd be willing to mess around with that...

    Chris, you are probably right that a larger image set will give us better or at least different results. But I still think it would be important to try and optimize the method with a smaller project first, just so nobody gets distracted by super long processing times...

  • Avatar
    Michal | CR

    Hey guys,
    that is a great initiative. Thank you very much. We like it very much in CR, and a benchmarking tool has been on our minds for a long time. We want to support you in this initiative as much as we can.

    There are several ways to do that.

    One way is: in the Workflow tab in the settings there is "Progress End Notification". Read the application help for the detailed information. You can attach a bat file to the notification that will do the job you need. Somebody needs to make the scripts etc.

    I think that the CLI is the way, as it works in the demo. It can work as follows:
    1. Clear cache
    2. Export and back up global settings "-exportGlobalSettings"
    3. Import your global settings "-importGlobalSettings", which will also include the "Progress End Notification" hooks.
    4. Run the tests

    I think that the cache clearing as well as the identical global settings are the most important, so that everybody will have the same starting conditions.
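    The four steps above could be driven by a small wrapper script. A sketch, assuming only the two flags named here ("-exportGlobalSettings" / "-importGlobalSettings") are real; the executable path, cache location, and the actual test arguments are placeholders:

```python
# Plan Michal's four-step benchmark run as an ordered action list.
# "clear_dir" would delete the cache folder (its location is an assumption);
# "run" entries are CLI invocations to execute in order.
def plan_benchmark(rc_exe, cache_dir, shared_settings, test_args):
    return [
        ("clear_dir", cache_dir),                                     # 1. clear cache
        ("run", [rc_exe, "-exportGlobalSettings", "backup.rcprog"]),  # 2. back up current settings
        ("run", [rc_exe, "-importGlobalSettings", shared_settings]),  # 3. identical settings for everybody
        ("run", [rc_exe] + list(test_args)),                          # 4. run the tests (args are placeholders)
    ]
```

    Separating the plan from the execution makes it easy to print and sanity-check the exact commands before anything touches a real project.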

    As you mentioned already, the dataset is also very important. I would recommend using a dataset of 300-500 images of ~10Mpx resolution of some relatively complicated structure; however, as it will be exposed publicly, it should have a wow factor. We can try to find some of ours but it can take some time. If you have some then go and use it.

  • Avatar
    Götz Echtenacher

    Hey Michal,

    thanks for the encouragement!

    Is it possible to install a demo in parallel with a "proper" license?

    I don't have CLI capabilities here on my system...


    Ivan, we're waiting for you now!   ;-)

  • Avatar

    Hi Michal

    Many thanks for the supportive words.

    I was unaware the Progress End Notification was able to do that; I shall take a good look at it and read up on the scripting required.

    I agree that the dataset should have a wow factor as it will encourage people to use it widely, and have a positive impact on the overall image of the software.  Something impressive is required, visually and technically.

    - Suggestions:  What do people think would be best ?

    Humans (would need someone with a multi-rig setup to donate the data). Getting a really good capture appears difficult.

    Architecture, internal or external.  There are some very beautiful buildings about that are accessible, which can have very nice intricate details, textures and features.   My testing has had some awesome results.

    Ideally a combination of nadir (above) and ground images makes for the best solution, which I have found to be tricky as UAVs and local authorities don't always mix nicely :). 

    Statues/Monuments can work well and can look nice.

    Heritage items/scenes usually have fantastic detail and textures.

    Maybe something organic could be good.  

    The subject matter is endless..


    Another option I was also thinking of was a test scene setup (example from dpreview), with various objects/models placed on it.
    Similar in a way to that, however fully three-dimensional and a lot more visually interesting.  

    Interesting objects could be arranged from a variety to cover most subjects people would be interested in,
    such as a lobster shell, some wood & stones/leaves, highly detailed miniature architectural models, engineering components etc., and 3D measuring guides.

    If created well, not only could the benchmark scene be visually impressive and be used for testing performance, it could also be used to compare the effect of changing settings/quality/accuracy etc. in a measurable and controlled way, even between software revisions.  Photographing it would have the bonus that the images taken could be very good and accurate due to 'studio' conditions.  Avoiding occlusion between objects could be very tricky unless well planned out.   Carefully selecting the various components would also take some thought.

    Perhaps I'm overcomplicating the situation?

    Anyone's input and suggestions are greatly welcomed.

  • Avatar

    An object like the one shown in the linked video may be something that has the required WOW factor.


    I have tried to do a reconstruction from that video and it turned out absolutely amazing despite the relatively bad quality of the source material.

  • Avatar

    Relief carvings etc., as in ShadowTail's example, are indeed impressive works of art, and the software does a fantastic job of extracting the depth from them.  However, I don't think it shows the true ability of the software, and could give the impression it is about 2D depth mapping and not full 3D scene/object recreation.  

    The end result needs to be technologically impressive as well as visually complex and interesting.

    Using someone else's work is definitely a no-go from the start.
    We will definitely be able to create our own data.

    To ensure the software is not being held up by strange issues, knowing the conditions under which the data was captured is important.  Full EXIF data etc.

    Finding a versatile and impressive subject (or subjects) that will capture the essence of what the software is capable of shouldn't be too hard with a bit of brainstorming.  I do not believe a small dataset of, say, 36 images will be enough to stress systems realistically to gather the data we require for a benchmark.  Nor will it produce an adequately high-quality result to represent what is possible.  It is very impressive what can be done with a few images; it is even more impressive what can be done with a larger number taken carefully. 

    Gotz's point of closed circles makes sense, and having a full scene would be nice.

    With regards to overcomplicating :)  It is indeed a good idea to be able to walk before you can run.   
    That said, I feel if a job is worth doing, it's worth doing properly.    If rushed and poorly thought out, you don't achieve what you set out to, and end up with a subpar project that is lacking in many areas.  A good balance is important.  If you bite off more than you can chew there is the risk things do not get completed.

    I'll have a ponder over the next few days.  Keep the suggestions coming :)


  • Avatar
    Götz Echtenacher

    Hey ivan,

    good point about a worthwhile project.

    I just wouldn't want this project to peter out because nobody gets around to doing a project with 500 images just like that.

    So I would say we can do it in parallel - looking for a nice image set that already exists where the result is known, and whoever feels like it can go out and shoot to impress!  :-)

    Michal, how about the stag in your showcase - would that be suitable/available?

  • Avatar
    Michal | CR

    I'm afraid that the stag will not be possible. I'll try to find something, and if not then grabbing a camera and capturing any tree trunk can be a start :)

  • Avatar

    I think having a set of images from a UAV flying a double grid would be pretty good.

    You can get a pretty decent model from 300-500 photos from that.

  • Avatar
    Götz Echtenacher

    I just tried to combine WOW with ALREADY THERE.  :-)

    The stag would fit that, in my opinion...  ...but that's not possible so moving on...

    Who will be first?  :D

    @chris:    I think that the internal processing for a model like you suggested might be quite different from an "all around" one, so I guess it would make sense to have one like that as well, especially since it is a common application for photogrammetry...

  • Avatar

    I would actually suggest having multiple objects ranging from near-perfect source images to low quality / noisy / video source images to show what magic RC can do even with bad source images.

    Ideally it would be the same object because that way they can be compared and the differences shown.

  • Avatar

    That does make sense Shadow, and not a bad idea at all.   Although it would require a good amount of strict control to ensure set variables are consistent between the shoots - more of an additional test to see the difference that image quality makes.  It could easily become quite in-depth fast, and ties into what I was suggesting with the test scene.    And yes Chris, UAV flights can also produce great results.  

    Jpg vs Raw
    Resolution differences
    Quantity of Images
    Various ISO levels
    Various Focal Lengths
    Sensor Sizes
    Bayer vs Foveon
    Optical Stabilisation on/off
    ...... the list goes on

    These are just some of the things we know make a difference, but to quantify them would be really nice.


    I have been making progress - on the technical side.  Start with the least fun and trickiest bits.

    I now have a script/batch file that loads up any set of images, goes through all the motions and spits out the following as a txt file. No other apps required, just click and go. 

    I think the output is *mostly* relevant - have I missed anything obvious?
    I couldn't interrogate the hardware as deeply as I wanted without external apps; however, I think if that can be avoided it will be far better.

    Start time 1:07:29.43
    Alignment Time 12 seconds
    Reconstruction Time 48 seconds
    Simplification Time 6 Seconds
    Texturing Time 124 seconds
    End Time 1:10:45.11
    Name=Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
    NVIDIA GeForce GTX 1080 Ti
    Capacity       PartNumber                       Speed
    8589934592 CMK16GX4M2B3200C16 3200
    8589934592 CMK16GX4M2B3200C16 3200
    8589934592 CMK16GX4M2B3200C16 3200
    8589934592 CMK16GX4M2B3200C16 3200
    Samsung SSD 850 EVO 500GB
    Size 500105249280
    Windows Version
    Pagefile Peak Usage


    The next stage is to parse the data into an easily uploadable online *database*, which can then nicely display the results to us all. 

    Hopefully next week, we should have something to test out.
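    For the parsing step, a minimal sketch of turning the text output shown above into a flat record ready for upload. The timing field names come from that sample; everything after them (RAM sticks, disks, etc.) varies per system, so matching only known keys sidesteps the shifting-lines problem:

```python
import re

# Match only the four timing lines from the benchmark output; hardware
# lines vary in count and order, so everything else is ignored.
TIMING = re.compile(
    r"^(Alignment|Reconstruction|Simplification|Texturing) Time (\d+) [Ss]econds$"
)

def parse_results(text):
    """Extract per-stage timings (in seconds) from the benchmark's txt output."""
    record = {}
    for line in text.splitlines():
        m = TIMING.match(line.strip())
        if m:
            record[m.group(1).lower() + "_s"] = int(m.group(2))
    return record
```

    A fixed set of named fields also means every uploaded row lands in the same spreadsheet columns regardless of how many GPUs or RAM sticks the system reports.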


  • Avatar
    Götz Echtenacher

    Hey ivan,

    great work!

    Is that CLI now?

    Would it be possible to split the processes up into all individual steps? E.g. depth map calculation has different HW needs than modeling etc...

    What more info would you want? I think it shouldn't be too complicated either, since you can never have 100% comparable setups anyway. It depends on so much! I think what ShadowTail and you are aspiring to is its own research project!  :D

    It would also be useful to see how much each component has been used, but I guess that's what you mean by deep HW interrogation...

    Could anyone answer my question concerning demo parallel to Promo?

  • Avatar

    Yes, it's CLI based, so end users should get a zip file with pretty much the following structure inside.

    Images Folder
    Benchmark.bat (contains all the script and code)
    Settings.rcprog (contains variables required by the script to be used in the application/benchmark and makes no permanent changes)

    Results.txt/csv (created as the bat file is run - this will need to be uploaded to the database)
    CompletedBenchmarkScene.rcproj (created when benchmark finished)

    I am also exploring a different method, as CLI does make things potentially tricky if using the promo; however, being able to control all the functions is very handy indeed.  It's all work in progress.

    Parsing the exported data is testing me, as multiple HDs/graphics cards/CPUs/RAM sticks can create extra lines and shift the results about, so the results structure is a little dynamic depending on the system; I need to figure that out, as well as some pathing issues.

    For the moment, the substages within each calculation are not recorded; I am working on that.  Ultimately, though, I think the required data can be extrapolated without them... I think... maybe...

    Recording the % of CPU/RAM used as a timeline is possible; however, it makes the results file huge & complex, as polling has to be captured constantly throughout the process.  And whilst interesting, you just get a list of tens of thousands of numbers; you can get a better visual representation of what is being used at certain points by playing a game of watch-the-task-manager :)
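    One way to keep the timeline without the tens of thousands of numbers would be to thin the samples before writing them out, e.g. averaging fixed-size buckets so the shape of the usage curve survives. A sketch (bucket size is arbitrary):

```python
# Collapse a long list of polled usage samples (e.g. CPU %) into one
# averaged value per fixed-size bucket, shrinking the log ~bucket-fold
# while keeping the overall shape of the curve.
def thin(samples, bucket=100):
    return [sum(samples[i:i + bucket]) / len(samples[i:i + bucket])
            for i in range(0, len(samples), bucket)]
```

    Averaging rather than simply keeping every Nth value avoids missing short spikes that happen to fall between the kept samples.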

    We do indeed need to be wary of not undertaking a PhD in image analysis. :)

    Re the Promo/Demo question: if I recall, it did not pose a problem for me when I tried last.  Things may have changed, so proceed at your peril... 


  • Avatar
    Götz Echtenacher

    Hi ivan,

    so you think it MIGHT work even with the Promo?

    Maybe we could sway Michal to provide testers with a short CLI license; that would alleviate this problem.

    I get what you're saying about the CPU usage. Would it at all be possible to thin it out by using only every, say, 100th or 1000th value and ditch the rest? But it's your call since you do all the essential stuff. It's really great that you are putting in all that work.

    I am planning on providing an image set as my contribution. I use a 12MP camera, so that would also fit Michal's suggestion of around 10MP. It's not high end at all, but we are not trying to create the best model ever, just a sound basis for a benchmark, right?


  • Avatar

    Benjy - what an interesting subject - I can indeed imagine that such an environment is quite surreal and beautiful in its imposing and harsh ways.  It would be great to see :)

    I'd imagine those machines would reconstruct brilliantly, the dusty and dirty environment giving a lot of texture.  However, they would also need a lot of images to avoid misalignment bugs for such a scene.

    I found a lion skull (as you do), thinking that would make an exciting subject... now not so much... :)

    I also have exactly the same capturing equipment in my arsenal, so can vouch for the results that are possible.
      - Off subject, one thing I have found is that the software does not support the Sony raw files, so I have to pre-convert them to TIFF or similar beforehand.   I did at one point manage to get the software to read them; however, I believe it was extracting the JPG preview from within the raw and not the true raw image data itself.

    Gotz - I don't think it will work directly with the promo (part of the reason the promo has the more accessible price over the full editions is that automation is disabled). However, I am pretty sure I installed the demo alongside when I had the promo installed, and was then able to either run them side by side, or just uninstall the demo afterwards and the promo would resume fine as before.  It was a few months back so I can't remember exactly - I'm pretty sure it worked fine.      For the moment I won't be adding the CPU% stuff, as the parsing of the data & coding is testing me enough as it is.  I have got it working as a proof of concept - dealing with the outputs is another matter.  So in time, maybe. 

    Frustratingly, I am in between systems at the moment and am awaiting a new workstation; however, it will take a week+ to be built/arrive, so I cannot test at the moment.  

