I am trying to build a 3D model from 2 images taken with the same camera, using OpenCV with C++. I followed this method. I am still unable to fix the errors in the R and T computation.

Image 1: background removed to eliminate mismatches


Image 2: image 1 translated in the X direction only, background removed to eliminate mismatches


I found the intrinsic camera matrix (K) using the MATLAB Toolbox. It is:

K = [ 3058.8      0     -500
           0   3057.3    488
           0        0      1 ]
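
In code this becomes the camera_Intrinsic matrix used further below (a sketch; the double precision is an assumption):

Mat camera_Intrinsic = (Mat_<double>(3, 3) << 3058.8,      0, -500,
                                                   0, 3057.3,  488,
                                                   0,      0,    1);   // CV_64F assumed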

All matched keypoints (found with SIFT and brute-force matching, with mismatches removed) are shifted to the image center as follows:

// Shift keypoints so the origin is the image center, with the y-axis flipped upwards.
obj_points.push_back(Point2f(
    keypoints1[symMatches[i].queryIdx].pt.x - image1.cols / 2,
    -1 * (keypoints1[symMatches[i].queryIdx].pt.y - image1.rows / 2)));
scene_points.push_back(Point2f(
    keypoints2[symMatches[i].trainIdx].pt.x - image1.cols / 2,
    -1 * (keypoints2[symMatches[i].trainIdx].pt.y - image1.rows / 2)));

From the point correspondences, I found the fundamental matrix using RANSAC in OpenCV.
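
The call is roughly the following (a sketch; the threshold, confidence, and the inlier_mask name are assumptions, not necessarily the exact values used):

vector<uchar> inlier_mask;   // hypothetical name
Mat f = findFundamentalMat(obj_points, scene_points,
                           FM_RANSAC,
                           3.0,           // max pixel distance to the epipolar line (assumed)
                           0.99,          // RANSAC confidence (assumed)
                           inlier_mask);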

Fundamental Matrix:

[ 0        0       -0.0014
  0        0        0.0028
  0.00149 -0.00572   1     ]

The essential matrix is obtained using:

E = (camera_Intrinsic.t())*f*camera_Intrinsic;

E obtained:

[  0.0094   36.290    1.507
 -37.2245   -0.6073  14.71
  -1.3578  -23.545   -0.442 ]

SVD of E:

E.convertTo(E, CV_32F);
Mat W = (Mat_<float>(3, 3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);   // orthogonal W used for the rotation
Mat Z = (Mat_<float>(3, 3) << 0, 1, 0, -1, 0, 0, 0, 0, 0);   // skew-symmetric Z (not used below)

SVD decomp = SVD(E);
Mat U = decomp.u;
Mat Lambda = decomp.w;
Mat Vt = decomp.vt;
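
As a sanity check at this point (an extra step, not part of the pipeline above): a valid essential matrix has two equal nonzero singular values and a third equal to zero, so Lambda can be inspected first:

// A proper essential matrix has singular values (s, s, 0);
// a large deviation here points at a bad F or K.
std::cout << "Singular values of E: " << Lambda << std::endl;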

New essential matrix enforcing the epipolar constraint (singular values forced to (1, 1, 0)):

Mat diag = (Mat_<float>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 0);
Mat new_E = U*diag*Vt;

SVD new_decomp = SVD(new_E);

Mat new_U = new_decomp.u;
Mat new_Lambda = new_decomp.w;
Mat new_Vt = new_decomp.vt;

Rotation from SVD:

Mat R1 = new_U*W*new_Vt;
Mat R2 = new_U*W.t()*new_Vt;
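
One pitfall worth checking here: U*W*Vt is a valid rotation only if its determinant is +1; the SVD can just as well return a reflection (determinant -1), and the usual fix is to negate the matrix. A sketch:

// A rotation must satisfy det(R) = +1; det(R) = -1 means a reflection.
if (determinant(R1) < 0) R1 = -R1;
if (determinant(R2) < 0) R2 = -R2;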

Translation from SVD:

Mat T1 = (Mat_<float>(3, 1) << new_U.at<float>(0, 2),
                               new_U.at<float>(1, 2),
                               new_U.at<float>(2, 2));   // last column of U
Mat T2 = -1 * T1;

The R matrices I get are:

R1:

[-0.58 -0.042 0.813
-0.020 -0.9975 -0.066
0.81 -0.054 0.578]

R2:

[0.98 0.0002 0.81
-0.02 -0.99 -0.066
0.81 -0.054 0.57]

Translation vectors:

T1:

[0.543 -0.030 0.838]

T2:

[-0.543 0.03 -0.83]

Please point out where the errors are.

All 4 sets of P2 matrices [R|T], with P1 = [I|0], give an incorrect triangulated model.
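
For reference, the standard way to disambiguate the four candidates is a cheirality check: triangulate with each P2 = K[R|T] against P1 = K[I|0] and keep the combination that puts the points in front of both cameras. A minimal sketch (not the original code; the names are assumptions):

// Cheirality check sketch: count triangulated points with positive depth
// in both cameras, for each of the four (R, T) combinations.
Mat Kf;
camera_Intrinsic.convertTo(Kf, CV_32F);   // assuming camera_Intrinsic is not CV_32F yet

Mat P1 = Kf * Mat::eye(3, 4, CV_32F);     // P1 = K[I|0]

Mat RT;
hconcat(R1, T1, RT);                      // repeat for (R1,T2), (R2,T1), (R2,T2)
Mat P2 = Kf * RT;

Mat points4D;
triangulatePoints(P1, P2, obj_points, scene_points, points4D);

int in_front = 0;
for (int i = 0; i < points4D.cols; i++) {
    Mat X = points4D.col(i) / points4D.at<float>(3, i);   // homogeneous -> Euclidean
    float z1 = X.at<float>(2);                            // depth in camera 1
    float z2 = Mat(RT * X).at<float>(2);                  // depth in camera 2
    if (z1 > 0 && z2 > 0) in_front++;
}
// The correct (R, T) is the combination with (nearly) all points in front.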

Also, I think the obtained T vector is incorrect, since the motion should be a pure x-shift with no z-shift.

When I try with image1 = image2, I get T = [0, 0, 1]. What does Tz = 1 mean? (Both images are identical, so there is no z-shift at all.)

Should I shift the keypoint coordinates to the image center, or to the principal point obtained from calibration?
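
In other words, would the variant that keeps the raw pixel coordinates, letting the principal point in K absorb the offset, be the correct one? That variant would look like:

// Alternative in question: raw pixel coordinates, no manual centering;
// the principal point in K then accounts for the offset.
obj_points.push_back(keypoints1[symMatches[i].queryIdx].pt);
scene_points.push_back(keypoints2[symMatches[i].trainIdx].pt);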