
Function approximation with an autoencoder in MATLAB


I have a simple nonlinear function y = x.^2, where x and y are n-dimensional vectors and the square is taken componentwise. I want to approximate y with a lower-dimensional vector using an autoencoder in MATLAB. The problem is that even when the low-dimensional space is set to n-1, the reconstructed y is distorted. My training data looks like this, and this is a typical result of reconstruction from the low-dimensional space. My MATLAB code is below.

%% Training data
inputSize=100;
hiddenSize1 = 80;

epo=1000;
dataNum=6000;
rng(123);
y=rand(2,dataNum);
xTrain=zeros(inputSize,dataNum);
for i=1:dataNum
    xTrain(:,i)=linspace(y(1,i),y(2,i),inputSize).^2;
end

%scale the data (centre each row at 0.5 and divide by its range)
for i=1:inputSize
    meanX=0.5; %mean(xTrain(i,:));
    sd=max(xTrain(i,:))-min(xTrain(i,:));
    xTrain(i,:) = (xTrain(i,:)- meanX)./sd;
end

%% Training the first Autoencoder

% Create the network. 
autoenc1 = feedforwardnet(hiddenSize1);
autoenc1.trainFcn = 'trainscg';
autoenc1.trainParam.epochs = epo;

% Do not use process functions at the input or output
autoenc1.inputs{1}.processFcns = {};
autoenc1.outputs{2}.processFcns = {};

% Set the transfer function for both layers to the hyperbolic tangent (tansig)
autoenc1.layers{1}.transferFcn = 'tansig';
autoenc1.layers{2}.transferFcn = 'tansig';

% Use all of the data for training
autoenc1.divideFcn = 'dividetrain';
autoenc1.performFcn = 'mae';
%% Train the autoencoder
autoenc1 = train(autoenc1,xTrain,xTrain);
%%
% Create an empty network
autoEncoder = network;

% Set the number of inputs and layers
autoEncoder.numInputs = 1;
autoEncoder.numLayers = 1;

% Connect the 1st (and only) layer to the 1st input, and also connect the
% 1st layer to the output
autoEncoder.inputConnect(1,1) = 1;
autoEncoder.outputConnect = 1;

% Add a connection for a bias term to the first layer
autoEncoder.biasConnect = 1;

% Set the size of the input and the 1st layer
autoEncoder.inputs{1}.size = inputSize;
autoEncoder.layers{1}.size = hiddenSize1;

% Use the hyperbolic tangent (tansig) transfer function for the first layer
autoEncoder.layers{1}.transferFcn = 'tansig';

% Copy the weights and biases from the first layer of the trained
% autoencoder to this network
autoEncoder.IW{1,1} = autoenc1.IW{1,1};
autoEncoder.b{1,1} = autoenc1.b{1,1};


%%
% generate the features
feat1 = autoEncoder(xTrain);

%%
% Create an empty network
autoDecoder = network;

% Set the number of inputs and layers
autoDecoder.numInputs = 1;
autoDecoder.numLayers = 1;

% Connect the 1st (and only) layer to the 1st input, and also connect the
% 1st layer to the output
autoDecoder.inputConnect(1,1) = 1;
autoDecoder.outputConnect(1) = 1;

% Add a connection for a bias term to the first layer
autoDecoder.biasConnect(1) = 1;

% Set the size of the input and the 1st layer
autoDecoder.inputs{1}.size = hiddenSize1;
autoDecoder.layers{1}.size = inputSize;

% Use the hyperbolic tangent (tansig) transfer function for the first layer
autoDecoder.layers{1}.transferFcn = 'tansig';

% Copy the weights and biases from the first layer of the trained
% autoencoder to this network

autoDecoder.IW{1,1} = autoenc1.LW{2,1};
autoDecoder.b{1,1} = autoenc1.b{2,1};

%% Reconstruction
desired=xTrain(:,50);
input=feat1(:,50);
output = autoDecoder(input);

figure
plot(output)
hold on
plot(desired,'r')

1 Answer

I really wouldn't use a single autoencoder to approximate the nonlinearity, because it will not reconstruct any better than plain linear PCA (I can give a more detailed mathematical argument if you want, although this is not math.stackexchange). You need to build a deep network that approximates the nonlinearity through several stacked layers. Even then, a plain autoencoder is a poor model to choose (hardly anyone uses them in practice today); denoising autoencoders tend to learn more useful representations, because they are trained to reconstruct the original input from a noisy version of it. Try building a deep denoising autoencoder. This video introduces the concept of denoising autoencoders, and the same course also has a video on deep denoising autoencoders.
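For illustration, here is a minimal sketch of such a deep denoising autoencoder, written in the same feedforwardnet / trainscg style as the question's code. It assumes the xTrain, inputSize, and epo variables from the question are still in the workspace; the noise level noiseStd and the hidden-layer sizes are arbitrary illustrative choices, not values from the original post.

noiseStd = 0.05;         % assumed corruption strength (not from the post)
hidden1 = 80;            % assumed outer hidden-layer size
bottleneck = 40;         % assumed low-dimensional code size

% Corrupt the inputs with additive Gaussian noise; the clean data remain
% the training target, so the network has to learn to denoise.
xNoisy = xTrain + noiseStd*randn(size(xTrain));

% Deep symmetric network: encoder layer, bottleneck, decoder layer.
dae = feedforwardnet([hidden1 bottleneck hidden1]);
dae.trainFcn = 'trainscg';
dae.trainParam.epochs = epo;
dae.divideFcn = 'dividetrain';
dae.inputs{1}.processFcns = {};
dae.outputs{dae.numLayers}.processFcns = {};
for k = 1:dae.numLayers-1
    dae.layers{k}.transferFcn = 'tansig';           % nonlinear hidden layers
end
dae.layers{dae.numLayers}.transferFcn = 'purelin';  % linear reconstruction layer

% Train to map noisy inputs back to the clean signals.
dae = train(dae, xNoisy, xTrain);

% Reconstruct one clean sample from a freshly corrupted copy.
recon = dae(xTrain(:,50) + noiseStd*randn(inputSize,1));
figure
plot(recon)
hold on
plot(xTrain(:,50),'r')

The bottleneck layer plays the role of the low-dimensional code, analogous to feat1 in the question's code.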
