PyTorch Advanced Tutorial (1)


The previous posts covered the basics of PyTorch; starting with this one, we look at how PyTorch is used for deep learning. Before diving in, let's first get familiar with some common concepts and layers.

class torch.nn.Module

Example:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
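To see the module in action, here is a minimal usage sketch; the dummy input shape (a batch of one single-channel 28x28 image) is an assumption chosen to suit the 5x5 convolutions:

import torch
from torch.autograd import Variable

model = Model()
print(model)  # lists the registered submodules conv1 and conv2

x = Variable(torch.randn(1, 1, 28, 28))  # hypothetical dummy input
out = model(x)  # calling the module invokes forward()
print(out.size())  # (1, 20, 20, 20): each 5x5 conv shrinks the map by 4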

This module was already used in the earlier linear regression and logistic regression posts, in much the same way.

class torch.nn.Sequential(*args)

# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
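Sequential simply runs its child modules in the order they are given. It also accepts an OrderedDict if you want named layers, as in this small sketch of the same model:

from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))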

2D convolution

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)

where in_channels and out_channels are the numbers of input and output channels (out_channels equals the number of filters) and kernel_size is the spatial size of the filters. The remaining parameters work the same way as their Caffe counterparts.
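A quick sanity check on the shapes, with the same imports as above (the 28x28 input size is an assumption, matching MNIST):

conv = nn.Conv2d(1, 20, 5)  # 20 filters of size 1x5x5
x = Variable(torch.randn(1, 1, 28, 28))
print(conv(x).size())  # (1, 20, 24, 24): 28 - 5 + 1 = 24 with stride=1, padding=0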

2D normalization

class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)

where num_features is the number of channels C of the expected (N, C, H, W) input.

BatchNorm2d computes normalized features per channel; the formula is

$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta$

with the mean and variance taken per channel over the mini-batch, and $\gamma$, $\beta$ learnable per-channel parameters (present when affine=True).
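A minimal sketch illustrating the per-channel normalization, with the same imports as above (the tensor shape is an assumption):

bn = nn.BatchNorm2d(16)
x = Variable(torch.randn(8, 16, 7, 7))
y = bn(x)  # in training mode, normalizes with the batch statistics
print(y.data[:, 0].mean())  # close to 0: channel 0 centered over the batch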

2D pooling

class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

where kernel_size is the size of the pooling window and stride is its step, defaulting to kernel_size when left as None. The remaining parameters are the same as in Caffe.

The size of the feature map after pooling is computed as

$H_{out} = \left\lfloor \frac{H_{in} + 2 \cdot \mathrm{padding} - \mathrm{dilation} \cdot (\mathrm{kernel\_size} - 1) - 1}{\mathrm{stride}} + 1 \right\rfloor$

and likewise for the width; with ceil_mode=True the floor is replaced by a ceiling.
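Checking the formula with the 2x2 pooling used in the network below (input size assumed, as before):

pool = nn.MaxPool2d(2)  # kernel_size=2, so stride defaults to 2
x = Variable(torch.randn(1, 16, 28, 28))
print(pool(x).size())  # (1, 16, 14, 14): floor((28 - 2) / 2 + 1) = 14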

Now for the moment of truth: let's see how a simple convolutional neural network is built.

First, load the data, just as in the previous posts.

import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable

# Hyper parameters
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = dsets.MNIST(root='./data/',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)

test_dataset = dsets.MNIST(root='./data/',
                           train=False,
                           transform=transforms.ToTensor())

# Data loader (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
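To confirm what the loader produces, here is a quick, purely illustrative peek at one batch:

images, labels = next(iter(train_loader))
print(images.size())  # (100, 1, 28, 28): a batch of single-channel 28x28 images
print(labels.size())  # (100,): one class index per image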

Build the convolutional neural network: two convolutional layers and one linear layer.

# CNN model (2 conv layers)
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.fc = nn.Linear(7*7*32, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)  # flatten to (batch_size, 7*7*32)
        out = self.fc(out)
        return out

cnn = CNN()
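Why 7*7*32 for the linear layer? With kernel_size=5 and padding=2 each convolution preserves the spatial size, and each MaxPool2d(2) halves it, so 28 → 14 → 7. A quick sanity check with a dummy input (shape assumed to match MNIST):

x = Variable(torch.randn(1, 1, 28, 28))
print(cnn(x).size())  # (1, 10): one score per digit class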

Define the loss and the optimization algorithm.

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate)

Start training.

# Train the model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images)
        labels = Variable(labels)

        # Forward + backward + optimize
        optimizer.zero_grad()
        outputs = cnn(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            # loss.data[0] reads the scalar loss on pre-0.4 PyTorch;
            # newer versions use loss.item()
            print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
                  % (epoch+1, num_epochs, i+1,
                     len(train_dataset)//batch_size, loss.data[0]))

Testing.

# Test the model
cnn.eval()  # change model to 'eval' mode (BN uses moving mean/var)
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images)
    outputs = cnn(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Test accuracy of the model on the 10000 test images: %d %%'
      % (100 * correct / total))

The final accuracy is 99%, with training taking about 7 minutes. Converting to a GPU version as in the previous post, accuracy is still 99% but training takes 34 seconds. Since data loading uses only one worker by default, setting num_workers=4 on the DataLoader cuts training time further, to 17 seconds.
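For reference, a sketch of the changes involved, using the same pre-0.4 API as the rest of the post (the exact placement inside the loop follows the previous post's GPU example):

cnn.cuda()  # move the model's parameters to the GPU

# inside the training loop, move each batch to the GPU as well
images = Variable(images.cuda())
labels = Variable(labels.cuda())

# and load data with several worker processes
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True,
                                           num_workers=4)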

Compared with the two linear layers of the previous post, the two convolutional layers plus one linear layer used here improve accuracy while using fewer parameters. The hidden layer of the previous post had 784x500 = 392000 weights, whereas the convolutional layers here have 16x1x5x5 + 32x16x5x5 = 13200.
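Note that these counts refer to weights only; biases and the BatchNorm parameters add a small amount on top. Spelled out:

# conv1: 16 filters, each 1 x 5 x 5
# conv2: 32 filters, each 16 x 5 x 5
print(16 * 1 * 5 * 5)    # 400
print(32 * 16 * 5 * 5)   # 12800
print(400 + 12800)       # 13200 convolution weights in total
print(784 * 500)         # 392000 weights in the previous post's hidden layer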
