Stacked Autoencoders Study Notes

2021-07-24 07:54:13 · 4,272 characters · 5,486 reads

The figure above shows a stacked autoencoder model, an unsupervised learning method. A stacked autoencoder is a neural network composed of multiple autoencoders, where the output of each autoencoder serves as the input of the next. The reconstruction error is computed by subtracting the reconstructed x from the input x.

The encoder part reduces the original 2000-dimensional features to 50 dimensions (the number of features per layer can be chosen to suit the task) through three hidden layers. Each hidden layer extracts higher-level features, and the output of the last encoder layer serves as the reduced representation, usable for classification or regression. The decoder part outputs the reconstructed x; the encoder and decoder parameters are adjusted to minimize the reconstruction error.
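The encoder described above can be sketched in Python with NumPy (an illustrative sketch, not the MATLAB code from this post; the layer sizes follow the 2000 → 1000 → 500 → 50 example, and the small-Gaussian initialization mirrors the MATLAB code below):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative encoder: 2000 -> 1000 -> 500 -> 50, with weights initialized
# to small Gaussian values as in the MATLAB code (randn * .0001).
layer_sizes = [2000, 1000, 500, 50]
weights = [rng.standard_normal((m, n)) * 1e-4
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

x = rng.standard_normal((8, 2000))   # 8 samples, 2000 raw features
h = x
for w, b in zip(weights, biases):
    h = sigmoid(h @ w + b)           # each layer extracts higher-level features

print(h.shape)                       # 50-dim code used for classification/regression
# → (8, 50)
```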

An AE model with a single hidden layer

function model = train_deep_autoenc(x, layers, lambda)
% x is the original data set (n x d). layers is a vector holding the number
% of features kept at each encoder layer; for the figure above it is
% [1000 500 50]. lambda is the coefficient of the L2 regularization term
% (default 0). Note: the function header was lost from the original post
% and is reconstructed here from context.
if ~exist('lambda', 'var') || isempty(lambda)
    lambda = 0;
end

% pretrain model using stacked denoising auto-encoders
no_layers = length(layers);        % number of encoder layers (3 in the example above)
model = cell(2 * no_layers, 1);    % initialize the SAE model (6 layers in total)
for i = 1:no_layers                % pretrain the encoder weights w and biases b
    noise = 0.1;
    max_iter = 30;
    network = train_autoencoder(x, layers(i), noise, max_iter);
    % store the trained encoder coefficients from network into model
    model{i}.w        = network{1}.w;
    model{i}.bias_upw = network{1}.bias_upw;
    % map the data through the newly trained layer so the next autoencoder
    % is trained on its output (this step is reconstructed from context)
    [~, x_mapped] = run_data_through_autoenc(network, x);
    x = x_mapped;
end

for i = 1:no_layers                % copy the encoder parameters into the decoder
    % the decoder weight is the transpose of the matching encoder weight
    model{no_layers + i}.w = model{no_layers - i + 1}.w';
    if i ~= no_layers
        % the decoder bias is the matching encoder bias
        model{no_layers + i}.bias_upw = model{no_layers - i}.bias_upw;
    else
        % the bias of the last decoder layer is initialized to zero
        model{no_layers + i}.bias_upw = zeros(1, size(x, 2));
    end
end

% compute mean squared error of initial model predictions
reconx = run_data_through_autoenc(model, x);
disp(['mse of initial model: ' num2str(mean((reconx(:) - x(:)) .^ 2))]);

% finetune model using gradient descent
noise = 0.1;
max_iter = 30;
model = backprop(model, x, x, max_iter, noise, lambda);

% compute mean squared error of final model predictions
reconx = run_data_through_autoenc(model, x);
disp(['mse of final model: ' num2str(mean((reconx(:) - x(:)) .^ 2))]);
end
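The decoder-initialization loop above ties the decoder to the encoder: decoder weight i is the transpose of encoder weight (no_layers - i + 1), decoder bias i reuses encoder bias (no_layers - i), and the last decoder bias is all zeros. A small NumPy sketch of this tying (toy layer sizes, made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Encoder weights/biases for a toy 20 -> 10 -> 5 network.
enc_w = [rng.standard_normal((20, 10)) * 1e-2,
         rng.standard_normal((10, 5)) * 1e-2]
enc_b = [np.zeros(10), np.zeros(5)]

# Build the decoder from the encoder, mirroring the MATLAB loop
# (0-based indices here): weights are transposed in reverse order,
# biases reuse the encoder biases, last decoder bias is zero.
no_layers = len(enc_w)
dec_w = [enc_w[no_layers - 1 - i].T for i in range(no_layers)]
dec_b = [enc_b[no_layers - 2 - i] for i in range(no_layers - 1)] + [np.zeros(20)]

x = rng.standard_normal((4, 20))
h = x
for w, b in zip(enc_w, enc_b):
    h = 1.0 / (1.0 + np.exp(-(h @ w + b)))   # encoder forward pass
recon = h
for w, b in zip(dec_w, dec_b):
    recon = recon @ w + b                    # linear decoder, for simplicity

print(recon.shape)   # reconstruction has the same shape as the input
# → (4, 20)
```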

function network = train_autoencoder(x, layers, noise, max_iter)
%train_autoencoder trains a single autoencoder
if nargin < 2
    error('not enough inputs.');
end
if isempty(layers)
    error('there should be at least one hidden layer.');
end
if ~exist('noise', 'var') || isempty(noise)
    noise = 0;
end
if ~exist('max_iter', 'var') || isempty(max_iter)
    max_iter = 50;
end

% initialize the network
d = size(x, 2);                    % dimensionality of the input features
no_layers = length(layers) + 1;
network = cell(no_layers, 1);      % initialize the autoencoder

% initialize the weights and bias of the first layer
network{1}.w = randn(d, layers(1)) * .0001;
network{1}.bias_upw = zeros(1, layers(1));

% initialize the weights and biases of the middle layers
for i = 2:no_layers - 1
    network{i}.w = randn(layers(i - 1), layers(i)) * .0001;
    network{i}.bias_upw = zeros(1, layers(i));
end

% initialize the weights and bias of the last layer
network{no_layers}.w = randn(layers(end), d) * .0001;
network{no_layers}.bias_upw = zeros(1, d);

% compute the initial reconstruction error
reconx = run_data_through_autoenc(network, x);
disp(['initial mse of reconstructions: ' num2str(mean((x(:) - reconx(:)) .^ 2))]);

% perform backpropagation to minimize the reconstruction error;
% the returned network holds the updated encoder and decoder coefficients
network = backprop(network, x, x, max_iter, noise);

% report the final reconstruction error
reconx = run_data_through_autoenc(network, x);
disp(['final mse of reconstructions: ' num2str(mean((x(:) - reconx(:)) .^ 2))]);
end
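The `noise` argument makes this a denoising autoencoder: the input is corrupted before training, and the network learns to reconstruct the clean x. A common corruption is masking noise, sketched below in NumPy; whether `backprop` in the code above uses exactly this corruption is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Masking corruption: each entry of x is zeroed with probability `noise`.
# A denoising autoencoder is then trained to map x_noisy back to the
# clean x (the exact corruption used by `backprop` above may differ).
noise = 0.1
x = rng.standard_normal((5, 8))
mask = rng.random(x.shape) >= noise      # True for entries that are kept
x_noisy = np.where(mask, x, 0.0)
```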

function [reconx, mappedx] = run_data_through_autoenc(network, x)
%run_data_through_autoenc computes the intermediate representation and the reconstruction
% Feed the input x through the network (encoder and decoder) to obtain the
% reconstructed x.

% initialize some variables
n = size(x, 1);
no_layers = length(network);
middle_layer = ceil(no_layers / 2);    % index of the middle layer (the last encoder layer)

% run data through autoencoder
activations = [x ones(n, 1)];
for i = 1:no_layers
    if i ~= middle_layer && i ~= no_layers
        % all layers except the middle and the last use the sigmoid,
        % so their activations lie between 0 and 1
        activations = [1 ./ (1 + exp(-(activations * [network{i}.w; network{i}.bias_upw]))) ones(n, 1)];
    else
        % the middle and last layers stay linear: their outputs are used as
        % features and as the reconstruction, and need not lie in (0, 1)
        activations = [activations * [network{i}.w; network{i}.bias_upw] ones(n, 1)];
        if i == middle_layer
            mappedx = activations(:, 1:end - 1);   % low-dimensional representation
        end
    end
end
reconx = activations(:, 1:end - 1);    % the output of the last layer is the reconstructed x
end

Here, activations * [network{i}.w; network{i}.bias_upw] computes the affine map x*w + b for all samples in a single matrix product: the column of ones appended to activations multiplies the bias row bias_upw, so the bias is added to every row of the result.
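This bias-absorption trick can be verified directly in NumPy (an illustrative check with arbitrary small shapes):

```python
import numpy as np

rng = np.random.default_rng(3)

# The trailing column of ones absorbs the bias into one matrix product:
# [x 1] @ [w; b] == x @ w + b, row by row.
x = rng.standard_normal((4, 3))
w = rng.standard_normal((3, 2))
b = rng.standard_normal((1, 2))

augmented = np.hstack([x, np.ones((4, 1))])   # [x 1]
stacked = np.vstack([w, b])                   # [w; bias_upw]

print(np.allclose(augmented @ stacked, x @ w + b))
# → True
```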
