Comments

3 comments

  • Wishgranter
    Hi Green

    How many cameras do you have in this setup? Is it fixed, with a constant camera count, or a "turntable" setup?

    The issues look like alignment issues... That is why I need to know a bit more about your setup.
  • Jan F.
    Hi Green
    The artifacts I was getting with a two-camera setup might point in the same direction. I tried a lot of settings, but the artifacts persisted in one way or another. My first thought was the same: this should be camera alignment. But after trying all possible settings, I think the core of these artifact effects may lie in the way the depth maps are fused.

    My personal assumption is (please, CR, correct me if I am wrong) that if you keep a relatively constant distance to the object, everything works fine, but if you cover a greater depth range in your project, going from far away to very close, the artifacting starts to occur. My personal »naive« interpretation of this is, on the one hand: if you move closer, the depth of field gets narrower, so more parts of the image are blurred, which hurts the depth information. I could get a slightly smoother result by setting »downscaling for depth-maps« to 2 or 4 for the close-up shots, as the downscaling eats some of the blurriness, but I lost fine details as well. The same probably applies if you shoot with an 85mm lens; I assume you have to close the aperture quite a bit to avoid a shallow DOF. On the other hand, there may be the issue Jon mentioned in PTX + Photos: if depth maps are saved with 8 bits, you might get visible color banding, which could cause problems with the depth gradient at bigger distances.
    [attached screenshot: 2016-04-22 10_50_37-IMG_6245_depth.png]
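    The 8-bit point is easy to put numbers on. A minimal sketch, assuming a linear depth encoding over an illustrative scene range (not necessarily how the software actually stores its depth maps):

```python
# Sketch: how coarse an 8-bit depth map gets over a wide depth range.
# Assumptions (illustrative): linear encoding, 0.5 m to 5 m scene depth.
near, far = 0.5, 5.0          # scene depth range in metres
levels = 256                  # 8-bit depth map
step = (far - near) / (levels - 1)
print(f"smallest representable depth difference: {step * 1000:.1f} mm")

def quantize(depth):
    """Snap a true depth to the nearest representable 8-bit level."""
    return near + round((depth - near) / step) * step

# A gentle 40 mm slope sampled at 1 mm increments collapses into a
# handful of flat steps: the banding described above.
samples = [quantize(near + i * 0.001) for i in range(40)]
print(f"distinct depth values over a 40 mm slope: {len(set(samples))}")
```

    With these numbers the smallest step is roughly 17.6 mm, so any surface gradient shallower than that per pixel turns into flat bands.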

    My personal solution right now is to avoid shots that are too close up, or to exclude the problem photos from meshing. There is an old paper from TU Darmstadt covering these topics: http://www.gris.informatik.tu-darmstadt ... ap-fusion/.
    One additional feature could help here: define something like a weighting factor per image, as is already possible for image-laserscan alignment, so that »bad« images contribute less to meshing.
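    A toy sketch of what such a per-image weight could do during fusion. This is hypothetical, not the software's actual pipeline; the `fuse` function and the weight values are made up to illustrate the idea:

```python
def fuse(depth_estimates):
    """Weighted average of per-camera depth estimates for one surface point.
    depth_estimates: list of (depth, weight) pairs; low-quality images
    (blurry close-ups) would get small weights."""
    total_weight = sum(w for _, w in depth_estimates)
    return sum(d * w for d, w in depth_estimates) / total_weight

# Three cameras see the same point; the blurry close-up (off by 0.5 m)
# is down-weighted to 0.1 instead of being averaged in at full strength.
fused = fuse([(1.00, 1.0), (1.02, 1.0), (1.50, 0.1)])
print(f"fused depth: {fused:.3f} m")
```

    With equal weights the outlier would pull the result to about 1.17 m; down-weighted, it stays near 1.03 m.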
    But maybe I am completely wrong?
    Best
    Jan
  • mads madsen
    Hello. I had the same kind of noise with a mixed lens/camera setup with 7 Nikons.
    The noise on the side of the head can occur because you have no good coverage there.

    But skin is translucent (subsurface scattering, wax-like), which means that some of the colors captured and used for the reconstruction are not on the actual skin surface but slightly underneath it.
    So for the cameras covering the side of the head, especially the temples and the sides of the forehead where the skin is very thin, when the camera angle is very steep, the colored pixels do not land on the same spot, because they come from underneath the actual skin surface.
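    A rough back-of-the-envelope for why steep angles make this worse: if the sampled color comes from a fixed depth under the skin, the lateral error on the surface grows with the tangent of the viewing angle. The 0.5 mm penetration depth below is an assumed, illustrative value, not a measured property of skin:

```python
import math

# Assumed mean depth (mm) from which subsurface-scattered colour is sampled.
penetration_mm = 0.5

# Lateral mismatch between the true surface point and where the colour
# appears to come from, as the view angle steepens (0 = looking straight on).
for angle_deg in (0, 30, 60, 80):
    offset = penetration_mm * math.tan(math.radians(angle_deg))
    print(f"{angle_deg:2d} deg from surface normal -> ~{offset:.2f} mm lateral error")
```

    At grazing angles the error is several times the penetration depth, which is why the thin-skinned temple and forehead regions, usually seen at steep angles, suffer most.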

    I went out and bought some face paint and camouflage-painted the head before capturing, and boom: top results on the temple and the cheek area near the ear, because no subsurface skin colors were coming through any more.

    You can test the subsurface scattering very well by capturing a waxy soap with herbs in it: the herbs are visible, but all of them sit underneath the actual surface.

    Best regards
