Neural Network Learning: PyTorch Learning 03 - Building a Model


torch.nn

(1) Sequential container for building a network structure: torch.nn.Sequential

models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data)
)

from collections import OrderedDict  # use an OrderedDict so that each module gets a custom name

models2 = torch.nn.Sequential(OrderedDict([
    ("line1", torch.nn.Linear(input_data, hidden_layer)),
    ("relu1", torch.nn.ReLU()),
    ("line2", torch.nn.Linear(hidden_layer, output_data))
]))
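Because the OrderedDict version gives each sub-module a custom name, those names appear in the module's printout and can be used to access the layers directly. A minimal sketch, assuming input_data, hidden_layer, and output_data are defined as in the full example below:

print(models2)        # repr lists the sub-modules as line1, relu1, line2
print(models2.line1)  # sub-modules registered via OrderedDict are accessible by name
for name, module in models2.named_children():
    print(name, module)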

(2) Linear layer: torch.nn.Linear

(3) Activation function: torch.nn.ReLU

(4) Loss functions: torch.nn.MSELoss (mean squared error), torch.nn.L1Loss (mean absolute error), torch.nn.CrossEntropyLoss (cross entropy); see the usage sketch after this list.
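The following is a minimal sketch (not part of the original post) of how the layers and loss functions above are called; the tensor shapes here are chosen purely for illustration:

import torch

layer = torch.nn.Linear(4, 3)   # linear layer: 4 input features -> 3 output features
relu = torch.nn.ReLU()
x = torch.randn(2, 4)           # a batch of 2 samples
pred = relu(layer(x))           # shape (2, 3)

target = torch.randn(2, 3)
print(torch.nn.MSELoss()(pred, target))  # mean squared error
print(torch.nn.L1Loss()(pred, target))   # mean absolute error

# CrossEntropyLoss expects raw scores of shape (N, C) and integer class labels of shape (N,)
labels = torch.tensor([0, 2])
print(torch.nn.CrossEntropyLoss()(pred, labels))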

import torch
from torch.autograd import Variable

batch_n = 100
hidden_layer = 100
input_data = 1000
output_data = 10

# wrap x as a graph node; requires_grad=False means no gradient is computed for it
x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)

models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data))

# from collections import OrderedDict  # use an OrderedDict so that each module gets a custom name
# models2 = torch.nn.Sequential(OrderedDict([
#     ("line1", torch.nn.Linear(input_data, hidden_layer)),
#     ("relu1", torch.nn.ReLU()),
#     ("line2", torch.nn.Linear(hidden_layer, output_data))]))

epoch_n = 10000
learning_rate = 0.0001
loss_fn = torch.nn.MSELoss()

for epoch in range(epoch_n):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    if epoch % 1000 == 0:
        print("epoch:{}, loss:{:.4f}".format(epoch, loss.data[0]))
    models.zero_grad()  # reset gradients to zero
    loss.backward()
    for param in models.parameters():  # iterate over the parameters and update each one
        param.data -= param.grad.data * learning_rate

The torch.optim package

Classes for automatic parameter optimization: SGD, Adagrad, RMSprop, Adam (see the instantiation sketch below).
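Each of these classes is constructed from the model's parameters plus its own hyperparameters. A minimal sketch, with a placeholder model and learning rate chosen only for illustration:

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
lr = 0.0001

opt_sgd = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
opt_adagrad = torch.optim.Adagrad(model.parameters(), lr=lr)
opt_rmsprop = torch.optim.RMSprop(model.parameters(), lr=lr)
opt_adam = torch.optim.Adam(model.parameters(), lr=lr)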

import torch
from torch.autograd import Variable

batch_n = 100
hidden_layer = 100
input_data = 1000
output_data = 10

x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)

models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data)
)

epoch_n = 20
learning_rate = 0.0001
loss_fn = torch.nn.MSELoss()

# torch.optim.Adam adaptively adjusts the learning rate used for each gradient update
optimzer = torch.optim.Adam(models.parameters(), lr=learning_rate)

for epoch in range(epoch_n):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    print("epoch:{}, loss:{:.4f}".format(epoch, loss.data[0]))
    optimzer.zero_grad()  # reset the parameter gradients to zero
    loss.backward()
    optimzer.step()       # update the model parameters
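Note that Variable and loss.data[0] come from the pre-0.4 PyTorch API; on PyTorch 0.4 and later, tensors carry autograd state directly and loss.data[0] raises an error for zero-dimensional tensors. A minimal sketch of the same training loop in the newer style, reusing the shapes from the example above:

import torch

x = torch.randn(100, 1000)
y = torch.randn(100, 10)

models = torch.nn.Sequential(
    torch.nn.Linear(1000, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 10))

loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(models.parameters(), lr=0.0001)

for epoch in range(20):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    print("epoch:{}, loss:{:.4f}".format(epoch, loss.item()))  # loss.item() replaces loss.data[0]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()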
