Building Neural Networks with PyTorch

2021-10-02 11:25:30

Introduction: It has been a while since I started learning about neural networks. With my limited math background, building one from scratch in numpy proved to be beyond me, though I did work out the basic steps. I then moved on to building networks with PyTorch. Below I record some simple ways to construct a neural network.

import numpy as np
import torch

n, d_in, h, d_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(n, d_in)
y = torch.randn(n, d_out)

w1 = torch.randn(d_in, h)   # input-layer weights
w2 = torch.randn(h, d_out)  # hidden-layer weights

learning_rate = 1e-6  # learning rate

for it in range(500):
    # forward pass ("hidden" avoids shadowing the dimension h)
    hidden = x.mm(w1)             # n x h
    h_relu = hidden.clamp(min=0)  # n x h
    y_pred = h_relu.mm(w2)        # n x d_out

    # compute loss
    loss = (y_pred - y).pow(2).sum().item()
    print(it, loss)

    # backward pass: compute the gradients by hand (no bias here)
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[hidden < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
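The hand-derived gradients above can be sanity-checked with autograd: mark w1 and w2 with requires_grad=True and let loss.backward() fill in .grad instead. A minimal sketch of that check (not from the original post; variable names are mine):

import torch

n, d_in, h, d_out = 64, 1000, 100, 10
x = torch.randn(n, d_in)
y = torch.randn(n, d_out)

# same two-layer net, but autograd computes the gradients
w1 = torch.randn(d_in, h, requires_grad=True)
w2 = torch.randn(h, d_out, requires_grad=True)
learning_rate = 1e-6

for it in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    print(it, loss.item())

    loss.backward()  # fills w1.grad and w2.grad

    with torch.no_grad():  # update weights outside the graph
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()  # gradients accumulate, so clear them each step
        w2.grad.zero_()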

(1) Building the model with torch.nn.Sequential:
import numpy as np
import torch
import torch.nn as nn

n, d_in, h, d_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(n, d_in)
y = torch.randn(n, d_out)

model = torch.nn.Sequential(
    torch.nn.Linear(d_in, h, bias=False),  # w1 @ x (bias b1 disabled)
    torch.nn.ReLU(),
    torch.nn.Linear(h, d_out, bias=False),
)

# loss function
loss_fn = nn.MSELoss(reduction='sum')

learning_rate = 1e-4  # learning rate
# optimizer that updates the model parameters
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # forward pass
    y_pred = model(x)

    # compute loss (this builds the computation graph)
    loss = loss_fn(y_pred, y)
    print(it, loss.item())

    optimizer.zero_grad()  # clear the gradients before backprop
    # backward pass
    loss.backward()
    # update model parameters
    optimizer.step()  # after the gradients are computed, update the parameters
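A Sequential container can be inspected layer by layer, which helps confirm the weight shapes before training; note that nn.Linear stores its weight as (out_features, in_features). A short usage sketch for the model above:

print(model)  # lists the Linear/ReLU/Linear layers in order
print(model[0].weight.shape)  # torch.Size([100, 1000]) -- (out_features, in_features)
for name, param in model.named_parameters():
    print(name, param.shape)  # e.g. '0.weight', '2.weight' (ReLU has no parameters)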

(2) Building the model as a torch.nn.Module subclass:
import numpy as np
import torch
import torch.nn as nn

n, d_in, h, d_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(n, d_in)
y = torch.randn(n, d_out)

class TwoLayerNet(torch.nn.Module):  # subclass nn.Module
    def __init__(self, d_in, h, d_out):
        super(TwoLayerNet, self).__init__()
        # define the model architecture
        self.linear1 = torch.nn.Linear(d_in, h, bias=False)  # bias is the b in y = ax + b (disabled here)
        self.linear2 = torch.nn.Linear(h, d_out, bias=False)

    def forward(self, x):
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred

model = TwoLayerNet(d_in, h, d_out)
'''Equivalent Sequential form:
model = torch.nn.Sequential(
    torch.nn.Linear(d_in, h, bias=False),  # w1 @ x, bias disabled
    torch.nn.ReLU(),
    torch.nn.Linear(h, d_out, bias=False),
)'''

loss_fn = nn.MSELoss(reduction='sum')  # loss function
learning_rate = 1e-4  # learning rate
# optimizer that updates the model parameters
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # forward pass: model(x) invokes model.forward(x)
    y_pred = model(x)

    # compute loss (this builds the computation graph)
    loss = loss_fn(y_pred, y)
    print(it, loss.item())

    optimizer.zero_grad()  # clear the gradients before backprop
    # backward pass: compute the gradients
    loss.backward()
    # update model parameters
    optimizer.step()  # after the gradients are computed, update the parameters
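Once training finishes, the module can be used for inference; wrapping the call in torch.no_grad() skips gradient tracking. A minimal sketch, where x_new is a hypothetical batch of fresh inputs:

x_new = torch.randn(5, d_in)  # hypothetical new inputs
model.eval()  # good habit; matters for layers like Dropout/BatchNorm
with torch.no_grad():  # no computation graph needed for inference
    pred = model(x_new)
print(pred.shape)  # torch.Size([5, 10])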
