Hardware Optimisation & Benchmarking Shenanigans


  • chris

    Ivan,

    looking at those logs ("Reconstruction Time 48 seconds"),

    do you know if there is a way to split that time into the depth map and model generation parts?

    It might not be possible,

    but otherwise we won't get separate GPU and CPU scores, just a mixed GPU+CPU score, which is less interesting for me.

  • Benjamin von Cramon

    Indeed, learning from the log how a particular GPU, CPU, etc. influences performance is the point of the exercise. Maybe that PhD in imaging is precisely what's needed here ;^( Someone would have to write a macro that takes time-lapse snips of Resource Monitor and a GPU monitoring utility like TechPowerUp GPU-Z at set intervals, and then find a way to glean values from the changing rasters... This appears to call for heavier lifting than off-the-shelf tools support. Definitely above my pay grade to properly conceive. We need a hero.

  • Benjamin von Cramon

    Hmmm, that gives me a thought. Whatever happens inside these CPU/GPU monitoring apps ends up rasterized into a set of graphs; what we actually need is access to the discrete samples driving those graphs, plus a routine for averaging and comparing the values. We're not the first to question the worth of benchmarking with off-the-shelf tools when the functions running inside a specific app call for more granular insight.

    If someone approached the developers behind something like TechPowerUp's GPU-Z and pitched them the concept we're after, with the broader selling point that letting their app track a user-specified application (RC in this case) and generate this awesomely informative log would vastly improve its utility, I believe there's a strong case, especially with manufacturers vying for sales, for taking benchmarking to the next level. I personally don't mind taking a stab at making contact and opening a dialogue. Maybe that's all I do, handing the dialogue off to Ivan, Michal, whomever, to present an orderly list of specs we'd want the app to perform to. TechPowerUp is just an example; if it's the right idea, we should put together a prioritized hit list. It sounds a bit ambitious to establish momentum, but I'm game to put in a little time toward this end and fly it up the flagpole.

  • chris

    It might just be easiest if we ask the RC team to write a bit more info to the log.

    If we can have the reconstruction time split into depth map generation and model generation, then we should have all that we need.

  • ivan

    I have already contacted the devs regarding this. It's the weekend, so let's be patient :) - I'm sure it will be possible.
    The application is aware of this stage, so theoretically it shouldn't be an issue. I have put a placeholder in the code for it. Alternatively, I can extrapolate the data via trickier scripting, which isn't so elegant.

    I have tried quite a few ungraceful things, some using third-party apps including GPU-Z (which is free to distribute non-commercially; modifications cost money). I think the way forward is to avoid any external applications, to sidestep legal issues, the complexity of dealing with the different way each additional app handles its data, and the problem of keeping compatibility across versions.

    I also think it would be improper to contact third parties, even if the intentions are good :)
    The Capturing Reality team likely have their own agenda, and I do not believe it would be professional of us to overstep or act on their behalf.

    All this could be done in-app; however, development time is likely better spent on other new features etc. It may be on the roadmap somewhere down the line...

    Everything is currently achievable via the app and some crafty scripting, even GPU monitoring.
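    Just to illustrate the kind of crafty scripting I mean (a rough sketch of the idea only, not the benchmark's actual code): utilisation could be sampled in a loop while RC runs. The nvidia-smi tool and the process name are assumptions here; nvidia-smi ships with the NVIDIA driver, so this would obviously only cover NVIDIA cards.

        @echo off
        rem Sketch only: append a GPU utilisation sample every 5 seconds while RC is running.
        rem Assumes an NVIDIA card (nvidia-smi comes with the driver) and the default process name.
        :sample
        nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader >> gpu_log.csv
        timeout /t 5 /nobreak > nul
        tasklist | find /i "RealityCapture.exe" > nul && goto sample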

  • Benjamin von Cramon

    That makes much better sense, keeping this within the app. Since RC alerts the user with every export of an asset that stats are being sent their way, couldn't this performance data be collected alongside the system specs, and the comparison across all users leveraged to provide the benchmarks? Right, development time better spent on new features.

  • Götz Echtenacher

    Hey Benjy,

    awesome that you offered to share your salt mine stuff!

    I have seen the staff room and I agree this would be an excellent choice.

    It's rare, quirky and an interior; interiors are rarely seen and people often seem to have difficulties with them. So that could contribute to showing that it IS possible.

    I'm somewhere in between ivan and Benjy on the approach, but since ivan is writing everything, the call is entirely up to him. The only issue might be, as chris pointed out, that the CPU and GPU parts should be separate to maximize the benefit.

    Anyway, thank you again, ivan, for doing all this!

  • ivan

    Good news!

    The benchmark now records the time taken for the following stages:

    1) Alignment
    2) Depth map reconstruction (GPU assisted)
    3) Model reconstruction (CPU)
    4) Simplify
    5) Texturing

    It was a case of me misunderstanding the effect of one of the CLI switches.  Michal kindly pointed me in the right direction.
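    For the curious, the timing itself needs nothing exotic; roughly like this (a sketch of the idea, with an illustrative stage placeholder and label rather than the actual benchmark code):

        @echo off
        rem Sketch: bracket one stage with epoch-second timestamps.
        rem PowerShell is used so we don't have to parse %time% across locales or past midnight.
        for /f %%t in ('powershell -NoProfile -Command "[DateTimeOffset]::UtcNow.ToUnixTimeSeconds()"') do set T0=%%t

        rem ... the stage under test (e.g. the depth map step of the CLI run) would go here ...

        for /f %%t in ('powershell -NoProfile -Command "[DateTimeOffset]::UtcNow.ToUnixTimeSeconds()"') do set T1=%%t
        set /a ELAPSED=T1-T0
        echo Depth map reconstruction: %ELAPSED% s >> results.txt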

  • Götz Echtenacher

    Hey ivan,

    excellent! I think now everything is covered, right?

    So now we "just" need a suitable image set...  ;-)

    I'm currently trying to see whether I can get an interior of a small Gothic choir covered in wall paintings to align with <500 images. Coverage would not be perfect (e.g. behind the altar), but it would certainly give a nice impression.

  • Götz Echtenacher

    One thing just came to mind. I guess we would need to rely on the automatic reconstruction region, right? Since RC tends to vary the orientation quite a bit, that region would not be identical in most cases, which would skew the results. To avoid this, we should add some GCPs to the scene. I have no idea if the automatic box is then always the same or if it also varies. If it does, we would need to import a custom reconstruction region.

  • ivan

    Good point, Götz. I also considered this; however, from my tests the region was always the same - maybe it isn't, or it's slightly different.
    I expect there will always be slight variance from run to run, as with all benchmarks. The best way to get accurate results is to run something many times and take the average. I don't think that level of accuracy is needed, unless the region does indeed cause issues.


    There is no project file, except the one generated by the benchmark at the very end. I could, however, set a fixed region via the code if need be, so that it is always constrained.

    However, as it stands, the code is totally dataset-indifferent, which I think is best, as it lets us and end users simply swap the contents of the Images folder if they wish to benchmark a different project. That makes things easiest in the long run.

    I *may* be able to have different options. That would be Alpha 0.2 - not past 0.1 yet :P
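    If a fixed region does turn out to be needed, my understanding is that the CLI can import a saved reconstruction region from a file, so the benchmark could ship one and load it on every run. A sketch only; treat the switch names as assumptions from memory to check against the CLI help, not verified code:

        rem Sketch only - switch names unverified, quoted from memory of the RC CLI help.
        rem box.rcbox would be a region exported once and shipped alongside the benchmark.
        RealityCapture.exe -addFolder Images -align -setReconstructionRegion box.rcbox -calculateNormalModel -quit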

     

     

     

     

  • Götz Echtenacher

    What you say makes sense. I guess with some GCPs included it should be all right, since then the orientation will be identical for everyone and the auto selection should be close enough...

  • ShadowTail

    Technically you should be able to generate a custom .rcproj file, though I don't recall if you can specify the reconstruction region or GCPs in there. Those might be saved as separate data files by RC.

    Also you need to keep in mind that the demo version of RC likely uses a slightly different format for the project files.

  • ivan

    Shadow - you are correct, I could create a custom .rcproj file for the project and the global settings (and back up the existing ones), and also have a specific file just for the region. Avoiding GCPs is always best, as in my experience needing them just means there is something off with your images. These things can be *easily* added/adjusted as we proceed.

    What are people's thoughts on an ID for the benchmark? No identifiable system data is scraped.
    However, some kind of identifier will be required, as the results list will eventually get long and you will need a way to find your own results. I can extract the username from Capturing Reality, and maybe just use the first name plus the first letter of the last name.

    So, for instance, Ivan Humperdinck results in Ivan H
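    (The trimming itself would be trivial once the full name is in a variable; a sketch with a made-up name:)

        @echo off
        rem Sketch: turn "Ivan Humperdinck" into "Ivan H".
        set "FULLNAME=Ivan Humperdinck"
        rem tokens=1,2 splits on spaces: %%a = first name, %%b = last name.
        for /f "tokens=1,2" %%a in ("%FULLNAME%") do (
            set "FIRST=%%a"
            set "LASTNAME=%%b"
        )
        set "BENCH_ID=%FIRST% %LASTNAME:~0,1%"
        echo %BENCH_ID%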


     

    (not my last name :)  )

  • Götz Echtenacher

    Oh, I would have thought to create an .rcproj file with the images and GCPs already included. All people would need to do is adjust the image path.

    If the file format is different with the demo, then there might be a problem. I guess that Michal would have pointed that out to ivan. There are many users out there without a CLI version...

  • Götz Echtenacher

    Hey ivan,

    it seems like you want to preserve your anonymity!  :-)

    I think it might be better to use the HW as the identifier, for exactly that reason. Maybe with the first letter of the name or so. But in my case the first name would be almost a 100% giveaway...

    The name is ok for internal purposes while testing but later I think it should be as neutral as possible when it goes public.

    GCPs are not there to patch up a model but to geo-reference (scale) it. It's an entirely standard procedure and it would make sure that the model is identical for everyone who runs the benchmark...

  • ivan

    You're 100% correct, GCPs are for what you suggest; I was getting mixed up with CPs. With all the scripting done over the last week... I don't even see the code. All I see is blonde, brunette, redhead.

    Re: an identifier - I can have it prompt the user for whatever they wish to enter at the start, so that may be better.
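    (Essentially a one-liner in the batch file; a sketch with a made-up variable name and a default if the user just presses Enter:)

        rem Sketch: ask for a nickname; if the user presses Enter, keep the default.
        set "BENCH_ID=anonymous"
        set /p BENCH_ID=Enter a nickname for your results: 
        echo Identifier: %BENCH_ID%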

     

  • Götz Echtenacher

    Hey ivan,

    so time for a break soon?  ;-)

    Yes, the custom name would be even better!

    Thumbs up once more!

  • ShadowTail

    A random GUID/UUID would probably be best for an identifier.

    I recall having a command line script somewhere that spits out random guids. I'll have to hunt it down.

    The beauty of it is that it requires no special programs.
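    Though these days no hunting should be needed; PowerShell ships with Windows, so a plain batch line can do it, something like this sketch:

        rem Sketch: generate a random GUID using only what ships with Windows.
        for /f %%g in ('powershell -NoProfile -Command "[guid]::NewGuid().ToString()"') do set RUN_GUID=%%g
        echo %RUN_GUID%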

  • ivan

    Random is not what I wanted.

    Say you upload your results, then at a later date you upload more from a different system or after a hardware change, and wish to compare them. The idea is that you can locate those results in the database by scrolling down to 'F', which will show all results by "FluffyBunny".

    The way selection will work isn't finalised yet; however, I did want an identifier that people could see, so they could compare their own results with others if they wanted.

     

  • Benjamin von Cramon

    It's a great project as is, but maybe I misunderstood the scope. I thought there'd be some kind of analytics applied to the database to make the comparisons for us. If we manually study others' system specs against our own and try to correlate the performance values for a given operation with whichever hardware represents the winning horse, I'd think that could be a non-trivial exercise to tease out from the confluence of so many factors. No? I may be totally off base here, overthinking it; maybe it's a more straightforward exercise.

  • Götz Echtenacher

    I thought that would be the second step, once there is enough data to go on?

  • ivan

    The idea is that the data you upload will be displayed to you (and then others), and you can choose to compare that dataset against a previous run that you did, or against other variables.

    I expect different people will wish to analyse the data in different ways.

    For me personally - and maybe selfishly - I wanted to compare against my own results initially, so I can make adjustments to my system and see how they affect each stage. These will be presented to everyone, for better or worse.

    Seeing how things compare on other systems will be great too.

    Data analysis can be a complex matter in itself; yes, we are talking another PhD :D. Presenting it in a manner that is ideal for everyone won't likely be possible, but we can work on some pretty graphs etc. The idea isn't to make it a race to show who is at the top or who has the fastest system; ultimately that data will be very valuable for seeing the system spec behind how/why that was achieved.

    However, it may not be the case that one system is the fastest at all stages, so I'll let the user choose whether they want the data ordered by date/user/fastest alignment/fastest depth map/fastest model creation/fastest texturing/gfx card/cpu/etc., against which they can make the comparison. I suppose a few default calculations could be done to say "Your computer sucks, it is 22448% slower than the fastest shown here." etc. :D ... Then you can try to see why. It will probably leave me full of buyer's remorse. :)
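    (That "X% slower" line would just be integer arithmetic over the stored stage times; a sketch with made-up numbers:)

        rem Sketch: how far a result trails the fastest one, in whole percent (values are made up).
        set /a FASTEST=48
        set /a YOURS=112
        set /a PCT_SLOWER=(YOURS-FASTEST)*100/FASTEST
        echo Your reconstruction was %PCT_SLOWER%%% slower than the fastest result.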

     

     

     

     

  • ivan

    OK - time for some testing. Welcome to The Reality Capture Benchmark Alpha 0.4.

    This is not close to final; however, I need feedback on how/if the scripting works on other people's systems.
    The online database is a work in progress, so I have omitted any code regarding that.

    1) Things you need to do: unzip to your desktop. It will currently only run from there.

    2) Choose your images and place them inside the Desktop/CapturingRealityBenchmark/Images folder.
    For now I'd suggest a smaller collection that you know works. None are currently included.

    3) Run the benchmark.bat file.
    You will be asked to enter an identifier/nickname at the start.

    4) Sit back and relax

    5) Once the benchmark has run its course, you will be given the option to enter any additional notes.

    6) The results will be generated into a file called results.txt. It should look similar to this.


    Don't worry that the times are not labelled etc.; that is all dealt with when the data is parsed at the database end.
    If your txt file looks different to this, please share - especially if you have multiple GPUs or HDDs.

     

    Current Known Issues/Potential Issues
    1) If the dataset is too small or the computer is too fast, completing a section in <15 s may mean the time stamp for that section is not recorded. Fix: increase the number of photos.
    2) I cannot identify whether more than one GPU is present (that requires the CUDA toolkit), or we must wait until my workstation arrives so I can test multi-GPU. (See the sketch below this list for a possible workaround.)
    2.5) The same goes for multiple HDDs.
    3) I run Windows 10; I am unsure if all the commands/scripts will work on earlier versions/VMs/servers.
    4) The code is English, as are the commands; I do not know if they work with other locales.
    5) It will likely only run with the Demo and Full/CLI versions of the application. So if you have the Promo, please try installing the demo.
    6) The script assumes you have installed the application in the default directory.
    7) Admin privileges may be required.
    8) Be wary of running software from unknown sources on the internet. Both *.bat files are in plain text; you are free to inspect the code in Notepad to ensure no shenanigans. You can also check with www.virustotal.com
    9) The project will delete your cache and may change some application settings away from default. Fear not, a backup of your settings is saved first, as "GlbBackup.bak.rcconfig".
    10) You looked at the code and question why it's such a mess, why I did it that way, and why it took me so long? Me too. I'm no expert.
    11) If you have made a suggestion and I ignored or refuted it, sorry. If you think it is important, try a different way to convince me; I may not have understood. This project is for the benefit of us all, and my opinion is just one of many. Everyone's input and suggestions are valued :)
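    Regarding 2) and 2.5), one possible workaround that avoids the CUDA toolkit entirely would be to query WMI from the batch file, something like this sketch (wmic is deprecated but still present on Windows 10):

        rem Sketch: list all GPUs and physical drives without any extra toolkits.
        wmic path win32_VideoController get name >> results.txt
        wmic diskdrive get model,size >> results.txt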


    Please, if you have time, give it a go and report back. If it does not work or does not behave as expected, please explain what happened with as much info as you can.
    Thank you :)



    Edit:
    Changelog - minor edit to the code to allow the CapturingRealityBenchmark folder to be located anywhere, not restricted to the desktop. I found when testing on a Mac/Parallels VM that virtual paths/directories did not work so well, so ideally it should be located in a real location.

  • Benjamin von Cramon

    Well done, Ivan, I much appreciate your efforts. My machine is chewing through a reconstruction presently; I will run the benchmark when RC is freed up. Don't we wish we could save any process mid-stream to resume at a later time.

     

    With gratitude,

    Benjy

  • Benjamin von Cramon

    Hello Ivan,

    I ran benchmark.bat and received the following error:

    Let me know. Thanks.

    Benjy

  • ivan

    Hi Benjamin - thank you for taking the time to test and give feedback.

    May I ask which version of the application you tested with?

    Demo, Promo, CLI?

     

  • Benjamin von Cramon

    Promo. I was walking out the door when I saw your email. I'll pick up with instructions when I get back in a couple hours.

    Best,

    Benjy

  • ivan

    Hi Benjamin,

    Unfortunately, for the moment the version posted will not work with the Promo, due to its restrictions on CLI instructions - one of the caveats of the more accessible price point.


    In my work-in-progress code it does check which licence type the user has; longer term, having a less detailed benchmark that runs on the Promo version may be possible.

    Compromises are always an issue, and getting the most detailed and accurate data took a higher priority.

    I am currently unable to test the effect of installing the demo while the Promo is installed.

  • Benjamin von Cramon

    Of course, you and Götz went through that previously. I'll await further instructions. I've just ordered an upgrade to the motherboard, RAM, and more SSD space for headroom; it would be nice to compare before and after.
