Hi - hope someone can suggest a way for us to take the jitter out of models reconstructed from sequential frames. We have a sequence of frames captured in a rig with 10 cameras zoomed in on our actor's face. The images are paired with XMP camera definitions whose prior is set to "locked", so we believe the cameras sit in the same world position for every frame. However, the reconstructed mesh shifts slightly between any two adjacent frames, so the result jitters very noticeably on playback.

In this sequence there is nothing visible to the cameras that is itself fixed in space. In a different sequence, part of the rig itself is in shot (and is of course static in world space), and the meshes we get from that sequence seem to jitter less. So:

1) Does it match other people's experience that having something fixed in the world helps to stabilise the mesh in space?

2) Where we can't keep something fixed in shot for practical reasons, is there something else we can do to anchor the reconstructions to some fixed frame of reference?
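To illustrate the kind of anchoring I mean, here is a rough post-process sketch we could imagine applying outside the reconstruction tool: rigidly snap each frame's mesh onto the first frame with a Kabsch/Procrustes fit. This assumes consistent vertex ordering across frames, and the function names (`rigid_align`, `stabilise_sequence`) and the `anchor_idx` idea (aligning only on a quasi-rigid region such as the forehead) are just my own illustration, not anything the software provides:

```python
import numpy as np

def rigid_align(src, ref):
    """Kabsch/Procrustes: find rotation R and translation t such that
    R @ src[i] + t best matches ref[i] in a least-squares sense.
    src, ref: (N, 3) arrays of corresponding 3D points."""
    mu_s, mu_r = src.mean(axis=0), ref.mean(axis=0)
    H = (src - mu_s).T @ (ref - mu_r)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_r - R @ mu_s
    return R, t

def stabilise_sequence(frames, anchor_idx=None):
    """Align every frame to frame 0, removing global rigid jitter while
    leaving non-rigid (facial) motion untouched.
    frames: list of (N, 3) vertex arrays with consistent ordering.
    anchor_idx: optional vertex indices of a quasi-rigid region to fit
    on (e.g. forehead/temples); defaults to all vertices."""
    ref = frames[0]
    sel = slice(None) if anchor_idx is None else anchor_idx
    out = []
    for verts in frames:
        R, t = rigid_align(verts[sel], ref[sel])
        out.append(verts @ R.T + t)
    return out
```

Obviously this only removes rigid drift relative to frame 0, so a solution that stabilises the cameras at reconstruction time would still be preferable.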