Workflow w/ BLK2Go scan data

Comments

4 comments

  • Ondrej Trhan CR

    Hi Benjy,
    as this is handhold scanner it is not supported by RealityCapture. It only supports terrestrial laser scans. E57 could also be ordered and it is supported by RealityCapture.

    Ordered means that a 360° scanner position is baked into the point cloud data for each scan position (when you are moving with a backpack or a car there are no actual scan positions, just georeferenced point cloud data).

    Ideally, RealityCapture works with scanner data exported per scan (per 360° terrestrial scan position).

    In most cases, 3D laser scanners on cars and backpack laser scanners produce unordered data, because no 360° laser scan position is actually calculated. RealityCapture natively supports only ordered data from static terrestrial laser scanners.

    We do not support data from devices like the BLK2Go, which produce unordered scan data (the individual scanner positions are not recorded during the scanning session, or we do not have that data after export). That is why we have created this workaround with fake 360° scan positions: https://support.capturingreality.com/hc/en-us/articles/360020653080-How-to-import-any-scan-data-to-RealityCapture-using-CloudCompare-and-Faro-Scene-How-to-create-ordered-point-cloud-from-unordered-.
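    [Editor's note] The ordered-vs-unordered distinction above can be sketched in a few lines. This is a NumPy illustration, not RealityCapture code, and the grid dimensions are made up:

```python
import numpy as np

# Ordered (static terrestrial) scan: points keep the scanner's 2D sweep
# grid, so every point traces back to one 360° scanner position.
rows, cols = 4, 8                           # tiny sweep grid for illustration
ordered = np.zeros((rows, cols, 3))         # shape (rows, cols, xyz)
scan_position = np.array([1.0, 2.0, 0.5])   # baked-in scanner origin (made up)

# Unordered (SLAM/mobile) scan: just a flat list of georeferenced points;
# the grid structure and the per-point scanner position are gone.
unordered = ordered.reshape(-1, 3)          # shape (N, 3)

print(ordered.shape)    # (4, 8, 3)
print(unordered.shape)  # (32, 3)
```

    Once the data is flattened like this, there is no way to recover which 360° position each point was seen from, which is exactly what RealityCapture needs.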

    Basically, you lay out the scans in Faro Scene as if you had scanned the places with a Faro S130 on a static tripod. The only difference is that you are now laying out these scan positions virtually in Faro Scene.

    The reason why RealityCapture requires knowledge of the camera position is that it's photogrammetry software (we treat laser scan data as camera positions, not as point clouds).

    I hope this has shed more light on this topic.

  • BenjvC

    Yes, Ondrej, thanks for shedding light. In the meantime I had picked up that SLAM LiDAR is unordered, so it makes good sense now what allows RC to take in laser data as a camera position. I'll follow the workaround; I'm hopeful. Here is what I had tried: I exported the camera-based model from RC with its .info file, brought it and a .las from the BLK2Go into CloudCompare, was able to align the latter to the former, computed normals, ran Poisson Reconstruction, exported FBX and later OBJ, and renamed the .info file to match, but I'm stumped. Both meshes imported into RC with what appears to be the original transform coming in from the BLK2Go. Why isn't the new transform from aligning to the RC mesh being recorded and written into the export? Also, the normals got flipped. I went back and flipped them in CloudCompare, but that's also evidently not recorded in the export. Am I missing a step?
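    [Editor's note] A likely culprit here, offered as an assumption rather than a confirmed diagnosis of the CloudCompare export: an alignment produces a 4×4 rigid transform that must be baked into the vertex coordinates themselves before export, and a normal flip likewise has to be written into the normal data, not kept as view-time state. A minimal NumPy sketch with made-up values:

```python
import numpy as np

# Hypothetical 4x4 rigid transform from aligning the laser mesh to the
# photo mesh (180° rotation about Z plus a translation; values made up).
T = np.array([
    [-1.0,  0.0, 0.0, 10.0],
    [ 0.0, -1.0, 0.0,  5.0],
    [ 0.0,  0.0, 1.0,  0.0],
    [ 0.0,  0.0, 0.0,  1.0],
])

vertices = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # mesh vertices
normals  = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # per-vertex normals

# Bake the transform: positions take the full affine transform...
homog = np.hstack([vertices, np.ones((len(vertices), 1))])
vertices_out = (T @ homog.T).T[:, :3]

# ...while normals take only the rotation part; the leading minus
# additionally flips them (as when normals point inward after Poisson).
normals_out = -(T[:3, :3] @ normals.T).T

print(vertices_out)
print(normals_out)
```

    If the exporter writes out the original arrays, applying the transform to the data like this before saving is what guarantees the alignment survives the round trip into FBX/OBJ.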

    I'll follow your link, possibly a better approach anyway. Much appreciate your help.

  • Ondrej Trhan CR

    For what you described (using a mesh from a camera and a handheld scanner), this could be a good tutorial for you: https://support.capturingreality.com/hc/en-us/articles/4408415049618-Using-a-model-made-by-a-handheld-scanner-in-RealityCapture. This could also help: https://www.youtube.com/watch?v=60wLmup4y00

  • BenjvC

    Okay then, so my thinking was correct; something was just missing in the execution. My laser mesh did align to my photo mesh, so it's not clear why the transform was lost on saving out to FBX or OBJ, or why RC reported no info file found when the file names were the same and the files sat right next to one another in the directory. In any event, I'll follow the workflow in the tutorials and hopefully pick up whatever spells the difference. Many thanks, you're always so helpful.
