Apply external transform

Comments

5 comments

  • Ondrej Trhan CR

    Hi Ivan,

Are your data georeferenced? Are you using GCPs or control points in your process? Have you seen this tutorial: https://www.youtube.com/watch?v=m2lNwwtUgNM&t=759s?

It is possible; you can find some info in this post: https://support.capturingreality.com/hc/en-us/community/posts/4410825805852?page=2#comments

  • i_malek

    Hi, thanks for your reply.

I tried to follow that exact tutorial. The laser scan data is not georeferenced, but I imported it as if it were, to keep it in place. When I try to merge the laser scan and photogrammetry components, they just seem to completely ignore each other, even though there should be around 95% overlap.

When I tried to use control points, it took me half a day to define them and the final result was not great. I was hoping RC would use them as a guide for rough placement and do some alignment on its own, but it looked like it used only the control points, and the resulting alignment was too imprecise to use. CloudCompare's alignment tools therefore seem like the way to go, although that might mean computing the meshes for all the separate parts and then computing a final merged mesh (in my model the photogrammetry is not meant for texture projection, but for adding parts where it would be difficult to position the lidar).


    Cheers,

    Ivan

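As a side note on the "rough placement from control points" idea above: a minimal sketch (plain numpy, not a RealityCapture or CloudCompare API; the point coordinates are invented for illustration) of fitting a similarity transform (scale + rotation + translation) to a handful of corresponding points picked in the photogrammetry and laser scan components, using the Umeyama method:

```python
import numpy as np

def fit_similarity(src, dst):
    """Return a 4x4 matrix T so that dst ~= (T @ [src, 1])[:3] (Umeyama fit)."""
    n = src.shape[0]
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n                     # 3x3 cross-covariance
    u, d, vt = np.linalg.svd(cov)
    s = np.eye(3)
    if np.linalg.det(u) * np.linalg.det(vt) < 0:  # avoid a mirrored solution
        s[2, 2] = -1.0
    rot = u @ s @ vt
    scale = np.trace(np.diag(d) @ s) / ((src_c ** 2).sum() / n)
    t = mu_d - scale * rot @ mu_s
    T = np.eye(4)
    T[:3, :3] = scale * rot
    T[:3, 3] = t
    return T

# Invented coordinates of 4 matching points in each component.
photo_pts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
scan_pts  = np.array([[10.0, 5.0, 2.0],
                      [10.0, 7.0, 2.0],
                      [ 8.0, 5.0, 2.0],
                      [10.0, 5.0, 4.0]])
print(np.round(fit_similarity(photo_pts, scan_pts), 3))
```

With exact correspondences the transform is recovered exactly; with hand-picked points this only gives a rough initial alignment that something like CloudCompare's ICP would still need to refine.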
  • Ondrej Trhan CR

Are you using a colored laser scan?

With control points, did you place at least three around the dataset? Also, were these points placed on at least three images from one component and on 2-3 LSPs from the laser scan component?

Be careful with the meshes, as photogrammetry basically has no scale, and you wrote that you don't have georeferencing here. Is this a big project? Would it be possible to share it with us so we can check?

  • i_malek

    Hi Ondrej,

the laser scan only has intensity data, no color. And yes, I placed 4 points per scan, so it did align, but the alignment was way off, so it wasn't useful and it took too much time. In the tutorial you linked to, it seemed to align even without control points and without color on the laser scan.

The data is too big to share: it's 151 laser scan positions and around 300 photos all in all, around 60 GB of data.

Scale is not a problem, CloudCompare handles scale as well (the laser scans should serve well as a reference), but as of yet I have no way to get the transform back into RC once CC outputs it.

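One way to turn the 4x4 matrix that CloudCompare reports after an alignment into translation/rotation/scale numbers is sketched below (plain numpy; the matrix values are placeholders, and the Z-Y-X Euler convention is an assumption that would need to be matched to whatever RealityCapture expects):

```python
import numpy as np

# Placeholder values; paste the real matrix from CloudCompare's console/log.
T = np.array([[0.0, -2.0, 0.0, 10.0],
              [2.0,  0.0, 0.0,  5.0],
              [0.0,  0.0, 2.0,  2.0],
              [0.0,  0.0, 0.0,  1.0]])

A = T[:3, :3]
scale = np.cbrt(np.linalg.det(A))   # uniform scale factor
R = A / scale                       # pure rotation once the scale is divided out

# Z-Y-X (yaw, pitch, roll) Euler angles -- one possible convention.
yaw   = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
pitch = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
roll  = np.degrees(np.arctan2(R[2, 1], R[2, 2]))

print("scale:", scale)
print("yaw/pitch/roll [deg]:", yaw, pitch, roll)
print("translation:", T[:3, 3])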
  • Ondrej Trhan CR

Were these points placed across multiple laser scans? Can you show me a screenshot of the 1Ds view with your control points, and also of your resulting alignment?

Yes, if the overlap is good, then the laser scans and images are aligned without using control points. It also helps when the data are georeferenced.

You can export the model from RealityCapture with the transformation/rotation parameters, which you can obtain in CloudCompare. Did you see the post I sent you in my previous message?

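As a hedged illustration of the suggestion above: the sketch below applies a CloudCompare 4x4 matrix directly to the vertices of an ASCII OBJ exported from RealityCapture, so the transformed copy can be imported back already in place. File names are placeholders; normals and any per-vertex colors are left untouched here and would need extra handling.

```python
import numpy as np

# cc_transform.txt: the 4x4 matrix saved/copied from CloudCompare (16 numbers).
T = np.loadtxt("cc_transform.txt").reshape(4, 4)

with open("model.obj") as fin, open("model_transformed.obj", "w") as fout:
    for line in fin:
        if line.startswith("v "):      # vertex position lines only
            x, y, z = map(float, line.split()[1:4])
            p = T @ np.array([x, y, z, 1.0])
            fout.write(f"v {p[0]:.6f} {p[1]:.6f} {p[2]:.6f}\n")
        else:                          # faces, uvs, normals, comments...
            fout.write(line)
```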
