
Sparse 3-D Reconstruction MATLAB Example

I have a stereo camera system and I am trying this MATLAB Computer Vision Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used the Caltech Camera Calibration Toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).

First, following the toolbox's first example, I calibrated each camera separately, found each camera's intrinsic calibration matrix, and saved them. I also undistorted the left and right images with the Caltech toolbox, so I commented that code out of the MATLAB example.

Here are the intrinsic camera matrices:

K1 = [1050 0 630; 0 1048 460; 0 0 1];

K2 = [1048 0 662; 0 1047 468; 0 0 1];

By the way, these are the right and center lenses of a Bumblebee XB3 camera.

Question: shouldn't they be the same?

Then I did the stereo calibration following the fifth example and saved the rotation matrix (R) and translation vector (T), so I commented that code out of the MATLAB example as well.

Here are the rotation and translation matrices:

R = [0.9999 -0.0080 -0.0086; 0.0080 1 0.0048; 0.0086 -0.0049 1];

T = [120.14 0.55 1.04];

Then I fed all of these images, calibration files, and camera matrices into the MATLAB example and tried to compute the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:

1- My epipolar constraint values are far too large (on the order of 10^16)!

2- I am not sure about the camera matrices and how to compute them from the Caltech toolbox's R and T!

P.S. As far as feature extraction goes, that part works fine.

It would be great if anyone could help.

clear
close all
clc

files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
    files{i} = fullfile('...\sparse_matlab', files{i});
    images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')

% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;

%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;

% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
    % detect SURF feature points
    images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
        'MetricThreshold',600);
    % extract SURF descriptors
    [images(i).featureVectors,images(i).points] = ...
        extractFeatures(rgb2gray(images(i).image),images(i).points);
end

% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));

indexPairs = ...
    matchFeatures(images(1).featureVectors, images(2).featureVectors,...
    'Prenormalized', true,'MaxRatio',0.4);
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;

% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage');
title('Original Matched Features from Globe01 and Globe02');

% Set a threshold near zero; it will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
    % Compute the fundamental matrix using the example helper function
    % and evaluate the epipolar constraint
    epipolarConstraint = [matchedPoints1.Location(k,:),1]...
        *helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
        *[matchedPoints2.Location(k,:),1]';

    %%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%

    % Only consider feature matches where the absolute value of the
    % constraint expression is less than the threshold.
    valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end

validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');

% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3);    % store color information

% For all point correspondences
for i = 1:length(validpts1)
    % For each pair of corresponding image locations, build the matrix A
    pointInImage1 = validpts1(i).Location;
    pointInImage2 = validpts2(i).Location;
    P1 = images(1).CameraMatrix'; % Transpose to match the convention in [1]
    P2 = images(2).CameraMatrix';
    A = [
        pointInImage1(1)*P1(3,:) - P1(1,:);...
        pointInImage1(2)*P1(3,:) - P1(2,:);...
        pointInImage2(1)*P2(3,:) - P2(1,:);...
        pointInImage2(2)*P2(3,:) - P2(2,:)];

    % Compute the 3-D location using the smallest singular value from the
    % singular value decomposition of the matrix A
    [~,~,V] = svd(A);

    X = V(:,end);
    X = X/X(end);

    % Store location
    points3D(i,:) = X';

    % Store pixel color for visualization
    y = round(pointInImage1(1));
    x = round(pointInImage1(2));
    color(i,:) = squeeze(doubleimage(x,y,:))';
end

% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];

% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')

% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');

1 Answer


First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating individual cameras, as well as support for programmatic stereo camera calibration. You would have an easier time using those tools, because the example you are following and the Caltech Calibration Toolbox use somewhat different conventions.
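
As a rough sketch of the programmatic stereo calibration route (the folder names, square size, and checkerboard setup below are assumptions for illustration, not taken from your setup):

% Sketch only: programmatic stereo calibration with the Computer Vision
% System Toolbox. Folder names and squareSize are assumed values.
leftImages  = imageSet(fullfile('calib','left'));
rightImages = imageSet(fullfile('calib','right'));

% Detect checkerboard corners in the image pairs
[imagePoints, boardSize] = detectCheckerboardPoints(...
    leftImages.ImageLocation, rightImages.ImageLocation);

% World coordinates of the corners (square size in mm, assumed here)
squareSize = 30;
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Estimate intrinsics of both cameras and the extrinsics between them
stereoParams = estimateCameraParameters(imagePoints, worldPoints);
showReprojectionErrors(stereoParams);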

The example uses the pre-multiply convention, i.e. row vector times matrix, while the Caltech toolbox uses the post-multiply convention (matrix times column vector). This means that if you use Caltech's camera parameters, you have to transpose the intrinsic matrices and the rotation matrix. That is most likely the main cause of your problems.
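
Concretely, here is a minimal sketch of that conversion, assuming KK, R, and T are the variables loaded from the Caltech .mat files as in your code, and that the left camera is taken as the world origin the way the example does:

% Sketch: converting Caltech (column-vector) parameters to the row-vector
% convention the example expects. KK, R, T come from the Caltech .mat files.
load('Calib_Results_Left.mat');   K1 = KK';   % transpose the intrinsics
load('Calib_Results_Right.mat');  K2 = KK';
load('Calib_Results_stereo.mat');

% Left camera is the reference: identity rotation, zero translation
images(1).CameraMatrix = [eye(3); 0 0 0] * K1;

% Right camera: transpose the rotation; T' turns the 3-by-1 Caltech
% translation into the 1-by-3 row vector this convention uses
images(2).CameraMatrix = [R'; T'] * K2;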

As for the intrinsics differing between the two cameras, that is completely normal. Every camera is slightly different.

It would also help to see the matched features you are using for triangulation. Given that you are reconstructing an elongated object, it is not too surprising to see the reconstructed points form a line in 3-D...

You can also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
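
A minimal sketch of that rectified, dense route, assuming stereoParams was estimated as above and using the image pair from your question (on newer releases reconstructScene may instead expect the reprojection matrix returned by rectifyStereoImages):

% Sketch: rectification followed by dense reconstruction.
I1 = imread('Left1.tif');
I2 = imread('Right1.tif');

% Rectify so that corresponding points lie on the same image rows
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);

% Dense disparity map from the rectified pair
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));

% Back-project every pixel to 3-D (units follow the calibration, e.g. mm)
xyzPoints = reconstructScene(disparityMap, stereoParams);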
