PyTorch basic workflow

2021-10-02 22:49:39

Tensor format conversion

This part mainly covers converting NumPy arrays into PyTorch tensors.

x_train, y_train, x_valid, y_valid = map(torch.tensor, (x_train, y_train, x_valid, y_valid))
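As a self-contained sketch, assuming stand-in arrays for the real MNIST splits (the shapes and random contents below are illustrative, not from the original post):

import numpy as np
import torch

# Hypothetical stand-ins for the real MNIST training split
x_train = np.random.rand(60000, 784).astype(np.float32)   # flattened 28x28 images
y_train = np.random.randint(0, 10, size=60000)             # integer class labels

x_train, y_train = map(torch.tensor, (x_train, y_train))
print(x_train.shape, x_train.dtype)   # torch.Size([60000, 784]) torch.float32
print(y_train.shape, y_train.dtype)   # typically torch.Size([60000]) torch.int64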

The torch.nn.functional module and the torch.nn.Module class

torch.nn.functional provides many operations that we will use frequently later. So when should you use nn.Module and when nn.functional? As a rule of thumb, if a layer has learnable parameters it is best written as an nn.Module; otherwise the nn.functional form is usually simpler. A short comparison follows the snippet below.

# For example, loss functions and activation functions can be used through torch.nn.functional

import torch.nn.functional as F

loss_func = F.cross_entropy
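To make the contrast concrete, here is a small sketch (the tensors are made up for illustration) showing that for a parameter-free operation the module form and the functional form give the same result:

import torch
import torch.nn.functional as F
from torch import nn

logits = torch.randn(4, 10)             # a made-up batch of class scores
target = torch.tensor([1, 0, 3, 7])     # made-up labels

loss_a = F.cross_entropy(logits, target)        # functional form: a plain function call
loss_b = nn.CrossEntropyLoss()(logits, target)  # module form: construct the module, then call it

print(torch.allclose(loss_a, loss_b))   # True; cross-entropy has no learnable parameters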

from torch import nn

# When building network modules, subclass nn.Module directly; you only need to write the forward pass, and the parameters are registered and initialized for you when the module is constructed.

class Mnist_NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 128)
        self.hidden2 = nn.Linear(128, 256)
        self.out = nn.Linear(256, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = self.out(x)
        return x

# After instantiating the model we can print its structure

net = Mnist_NN()
print(net)


Mnist_NN(
  (hidden1): Linear(in_features=784, out_features=128, bias=True)
  (hidden2): Linear(in_features=128, out_features=256, bias=True)
  (out): Linear(in_features=256, out_features=10, bias=True)
)
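As a quick sanity check (the batch size of 64 here is just an assumption), we can push a random batch through the network and confirm the output shape:

x_dummy = torch.randn(64, 784)   # 64 flattened 28x28 images
out = net(x_dummy)
print(out.shape)                 # torch.Size([64, 10]): one score per digit class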

from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

bs = 64  # batch size (assumed value; the original post does not specify it)

train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)

def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )
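To see what the DataLoader actually yields, one can peek at a single mini-batch (the printed shapes assume bs = 64):

xb, yb = next(iter(train_dl))
print(xb.shape, yb.shape)   # e.g. torch.Size([64, 784]) torch.Size([64])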

# Related helper functions

from torch import optim

def get_model():
    model = Mnist_NN()
    # The optimizer must be given the parameters it is going to optimize
    return model, optim.SGD(model.parameters(), lr=0.001)

def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)

    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()

    return loss.item(), len(xb)
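A small usage sketch with the loss_func and train_dl defined above: calling loss_batch without an optimizer only evaluates the loss, while passing the optimizer also performs one update step.

model, opt = get_model()
xb, yb = next(iter(train_dl))

eval_loss, n = loss_batch(model, loss_func, xb, yb)         # forward pass only
train_loss, n = loss_batch(model, loss_func, xb, yb, opt)   # forward + backward + SGD step
print(eval_loss, train_loss, n)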

import numpy as np

def fit(steps, model, loss_func, opt, train_dl, valid_dl):
    for step in range(steps):
        # Call model.train() during training so that layers such as batch
        # normalization and dropout behave in training mode
        model.train()
        for xb, yb in train_dl:
            loss_batch(model, loss_func, xb, yb, opt)

        model.eval()
        with torch.no_grad():
            losses, nums = zip(
                *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
            )
        val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)

        print('Current step: ' + str(step), 'validation loss: ' + str(val_loss))

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(20, model, loss_func, opt, train_dl, valid_dl)


Current step: 0 validation loss: 2.281271044921875
Current step: 1 validation loss: 2.2509783180236815
Current step: 2 validation loss: 2.203812783432007
Current step: 3 validation loss: 2.1252328746795652
Current step: 4 validation loss: 1.9954518688201903
Current step: 5 validation loss: 1.7956134561538697
Current step: 6 validation loss: 1.5333285322189332
Current step: 7 validation loss: 1.261820195388794
Current step: 8 validation loss: 1.0405126466751098
Current step: 9 validation loss: 0.880806344127655
Current step: 10 validation loss: 0.7669796929359436
Current step: 11 validation loss: 0.6844435347557067
Current step: 12 validation loss: 0.6225414663314819
Current step: 13 validation loss: 0.5751254560470581
Current step: 14 validation loss: 0.5374272463321685
Current step: 15 validation loss: 0.5071435091495514
Current step: 16 validation loss: 0.48235899238586427
Current step: 17 validation loss: 0.46154185042381285
Current step: 18 validation loss: 0.44456041851043704
Current step: 19 validation loss: 0.42940848422050476
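A natural follow-up, sketched here as an addition rather than something from the original post, is to measure accuracy on the validation set after training:

model.eval()
correct, total = 0, 0
with torch.no_grad():
    for xb, yb in valid_dl:
        preds = model(xb).argmax(dim=1)          # predicted class per sample
        correct += (preds == yb).sum().item()
        total += yb.size(0)
print('validation accuracy:', correct / total)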
