Implementing Dropout in Gluon


In a multilayer perceptron:

Each hidden unit h_i is dropped (set to zero) with probability p; with probability 1 - p it is kept and stretched by dividing by 1 - p, so the expected value is unchanged: h_i' = 0 with probability p, and h_i' = h_i / (1 - p) otherwise, giving E[h_i'] = h_i.
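In code this is a small masking operation. Below is a minimal from-scratch sketch in MXNet; the helper name `dropout` and the uniform-noise mask are illustrative, not part of the Gluon API:

    from mxnet import nd

    def dropout(X, drop_prob):
        assert 0 <= drop_prob <= 1
        keep_prob = 1 - drop_prob
        if keep_prob == 0:  # drop everything
            return X.zeros_like()
        # keep each element with probability 1 - p ...
        mask = nd.random.uniform(0, 1, X.shape) < keep_prob
        # ... and stretch the kept elements by 1 / (1 - p) to preserve the expectation
        return mask * X / keep_prob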

    # Model parameters
    import os
    import sys
    from mxnet import autograd, init, nd
    from mxnet.gluon import data as gdata, nn
    import gluonbook as gb  # helper package from the d2l book; provides gb.sgd used below

    num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256

    w1 = nd.random.normal(scale=0.01, shape=(num_inputs, num_hiddens1))
    b1 = nd.zeros(num_hiddens1)
    w2 = nd.random.normal(scale=0.01, shape=(num_hiddens1, num_hiddens2))
    b2 = nd.zeros(num_hiddens2)
    w3 = nd.random.normal(scale=0.01, shape=(num_hiddens2, num_outputs))
    b3 = nd.zeros(num_outputs)

    params = [w1, b1, w2, b2, w3, b3]
    for param in params:
        param.attach_grad()  # allocate gradient buffers for autograd

Define the network (from scratch):
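A from-scratch forward pass consistent with the parameters above might look like the following sketch; it assumes the illustrative `dropout` helper sketched earlier, and drop_prob1 / drop_prob2 as set in the Gluon version further below:

    def net(X):
        X = X.reshape((-1, num_inputs))
        H1 = (nd.dot(X, w1) + b1).relu()
        if autograd.is_training():  # apply dropout only in training mode
            H1 = dropout(H1, drop_prob1)
        H2 = (nd.dot(H1, w2) + b2).relu()
        if autograd.is_training():
            H2 = dropout(H2, drop_prob2)
        return nd.dot(H2, w3) + b3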

Read the data:

    # Fashion-MNIST images are 28*28; the optional resize argument scales them (e.g. to 224*224)

    def load_data_fashion_mnist(batch_size, resize=None, root=os.path.join(
            '~', '.mxnet', 'datasets', 'fashion-mnist')):
        root = os.path.expanduser(root)  # expand the user path '~'
        transformer = []
        if resize:
            transformer += [gdata.vision.transforms.Resize(resize)]
        transformer += [gdata.vision.transforms.ToTensor()]
        transformer = gdata.vision.transforms.Compose(transformer)
        mnist_train = gdata.vision.FashionMNIST(root=root, train=True)
        mnist_test = gdata.vision.FashionMNIST(root=root, train=False)
        num_workers = 0 if sys.platform.startswith('win32') else 4
        train_iter = gdata.DataLoader(
            mnist_train.transform_first(transformer), batch_size, shuffle=True,
            num_workers=num_workers)
        test_iter = gdata.DataLoader(
            mnist_test.transform_first(transformer), batch_size, shuffle=False,
            num_workers=num_workers)
        return train_iter, test_iter
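A quick usage sketch; the batch size of 256 is an assumed value, not from the original:

    batch_size = 256
    train_iter, test_iter = load_data_fashion_mnist(batch_size)
    for x, y in train_iter:
        print(x.shape, y.shape)  # (256, 1, 28, 28) (256,) after ToTensor
        break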

    # Define the network
    drop_prob1, drop_prob2 = 0.2, 0.5

    # Gluon version
    net = nn.Sequential()
    net.add(nn.Dense(256, activation="relu"),
            nn.Dropout(drop_prob1),
            nn.Dense(256, activation="relu"),
            nn.Dropout(drop_prob2),
            nn.Dense(10))
    net.initialize(init.Normal(sigma=0.01))
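Note that nn.Dropout is only active in training mode; in prediction mode it is an identity map. A small sketch, with the shape and probability chosen purely for illustration:

    from mxnet import autograd, nd

    layer = nn.Dropout(0.5)
    layer.initialize()
    x = nd.ones((2, 8))
    print(layer(x))          # prediction mode: output equals x
    with autograd.record():  # training mode: roughly half the entries become 0,
        print(layer(x))      # the survivors are stretched to 1 / (1 - 0.5) = 2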

    # Train the model
    def accuracy(y_hat, y):
        return (y_hat.argmax(axis=1) == y.astype('float32')).mean().asscalar()

    def evaluate_accuracy(data_iter, net):
        acc = 0
        for x, y in data_iter:
            acc += accuracy(net(x), y)
        return acc / len(data_iter)

    def train(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, trainer=None):
        for epoch in range(num_epochs):
            train_l_sum = 0
            train_acc_sum = 0
            for x, y in train_iter:
                with autograd.record():  # training mode: dropout is active here
                    y_hat = net(x)
                    l = loss(y_hat, y)
                l.backward()
                if trainer is None:
                    gb.sgd(params, lr, batch_size)  # plain SGD on manual params
                else:
                    trainer.step(batch_size)        # Gluon Trainer update
                train_l_sum += l.mean().asscalar()
                train_acc_sum += accuracy(y_hat, y)
            test_acc = evaluate_accuracy(test_iter, net)
            print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
                  % (epoch + 1, train_l_sum / len(train_iter),
                     train_acc_sum / len(train_iter), test_acc))

    # Will be used in the next section.
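Putting the pieces together, a minimal end-to-end sketch, assuming num_epochs = 5 and lr = 0.5 (hyperparameter values not given in the original):

    from mxnet import gluon
    from mxnet.gluon import loss as gloss

    num_epochs, lr, batch_size = 5, 0.5, 256  # assumed hyperparameters
    loss = gloss.SoftmaxCrossEntropyLoss()
    train_iter, test_iter = load_data_fashion_mnist(batch_size)
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
    train(net, train_iter, test_iter, loss, num_epochs, batch_size,
          trainer=trainer)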
