I am trying to use head-pose estimation (http://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/)
to automatically pose the face of the MakeHuman avatar to match a given photo.
Now I do have a rotation vector and a translation vector out of cv2.solvePnPRansac; these describe the head pose in the camera's Cartesian frame, with the rotation vector in Rodrigues (axis-angle) form.
Code:
import cv2

(success, rotation_vector, translation_vector, inliers) = cv2.solvePnPRansac(
    model_points, image_points,
    self.camera_matrix, self.dist_coeffs,
    flags=cv2.SOLVEPNP_ITERATIVE)
My problem now is how to apply these to the (orbital?) camera so that the avatar's face is shown in the same pose as in the original photo.
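For reference, here is how I understand the conversion should go (a minimal sketch using the rotation_vector and translation_vector from the call above; how the resulting Euler angles map onto MakeHuman's orbital camera, i.e. the axis order and signs, is exactly the part I am unsure about):

Code:
import cv2

# Rodrigues axis-angle vector -> 3x3 rotation matrix
rotation_matrix, _ = cv2.Rodrigues(rotation_vector)

# solvePnP returns the model-to-camera transform: X_cam = R @ X_model + t,
# so the camera position expressed in model coordinates is -R^T @ t.
camera_position = -rotation_matrix.T @ translation_vector

# Decompose R into Euler angles in degrees (first return value of
# cv2.RQDecomp3x3); presumably these feed the orbital camera's
# pitch/yaw/roll, but that mapping is my assumption.
pitch, yaw, roll = cv2.RQDecomp3x3(rotation_matrix)[0]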
This is what I want:
This is what I have now.
blue - 2d facial landmarks from the original photo
red - 2d facial landmarks from the screen capture of the mesh view
green - 3d facial landmarks projected onto the view, which should be very close to the blue ones, but are not because the view is not aligned correctly (a sanity check for this is sketched below)
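To separate PnP error from camera-alignment error, one check I can run is to reproject the 3D model landmarks into the original photo with the estimated pose; if the PnP fit is good, these should land on top of the blue landmarks. A sketch of that check (same arrays as in the snippet above, without the self. prefixes):

Code:
import cv2
import numpy as np

# Project the 3D model landmarks back into the photo with the
# estimated pose and measure the distance to the detected 2D
# landmarks from the photo.
projected, _ = cv2.projectPoints(model_points,
                                 rotation_vector, translation_vector,
                                 camera_matrix, dist_coeffs)
error = np.linalg.norm(projected.reshape(-1, 2)
                       - image_points.reshape(-1, 2), axis=1).mean()
print("mean reprojection error (px):", error)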
Not sure I'm being very clear... any help appreciated.
Cheers.
-David