5.2 Advanced: GPU-Accelerated Computation


Contents

1. Preface

2. Training a CNN on the GPU

3. Complete code demo

4. Moving back to the CPU

Training on a GPU can dramatically speed up computation, and torch comes with a well-integrated set of GPU facilities. One thing to point out up front:

This GPU code is adapted from the earlier CNN code. The changes mostly amount to converting the data into a form the GPU can read and converting the CNN into a form the GPU can read, and in both cases the trick is simply to append .cuda(), nothing more.
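Before running any of the .cuda() calls below, it is worth confirming that torch can actually see a CUDA device. This is a small check of my own (it assumes a CUDA build of PyTorch is installed), not part of the tutorial's code:

import torch

print(torch.cuda.is_available())   # True if a usable CUDA GPU is visible to torch
print(torch.cuda.device_count())   # how many GPUs torch can see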

...

test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)

# !!!!!!!! convert the test data to GPU form !!!!!!!!! #

test_x = torch.unsqueeze(test_data.test_data, dim=1).type(torch.FloatTensor)[:2000].cuda()/255.   # Tensor on GPU

test_y = test_data.test_labels[:2000].cuda()
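Note that newer torchvision releases renamed the MNIST attributes used above, so test_data / test_labels may emit a deprecation warning or be missing. If that happens, the equivalent lines (an assumption about your installed torchvision version) would be:

test_x = torch.unsqueeze(test_data.data, dim=1).type(torch.FloatTensor)[:2000].cuda()/255.   # Tensor on GPU
test_y = test_data.targets[:2000].cuda()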

Next, make our CNN's parameters GPU-compatible as well.

class CNN(nn.Module):
    ...

cnn = CNN()

# !!!!!!!! move the CNN to CUDA !!!!!!!!! #

cnn.cuda()      # moves all model parameters and buffers to the GPU
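A quick way to confirm the move worked (a small check of my own, not part of the original code) is to ask any parameter tensor whether it now lives on a CUDA device:

print(next(cnn.parameters()).is_cuda)   # True once cnn.cuda() has been called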

Then, during training, convert each batch of training data to GPU form, again by appending .cuda().

for epoch in ...:
    for step, ... in ...:

        # !!!!!!!! change in here !!!!!!!!! #
        b_x = x.cuda()    # Tensor on GPU
        b_y = y.cuda()    # Tensor on GPU

        ...

        if step % 50 == 0:
            test_output = cnn(test_x)

            # !!!!!!!! change in here !!!!!!!!! #
            pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze()    # keep the computation on the GPU
            accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)

...

test_output = cnn(test_x[:10])

# !!!!!!!! change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze()    # keep the computation on the GPU

...

print(test_y[:10], 'real number')
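As a side note, the same changes can be written device-agnostically with .to(device), so the script still runs on a machine without a GPU. This is a sketch of the idea rather than the tutorial's original code, and it reuses the cnn, train_loader, loss_func and optimizer objects from the full script below:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

cnn.to(device)                             # same effect as cnn.cuda() when a GPU is present

for step, (x, y) in enumerate(train_loader):
    b_x = x.to(device)                     # stays a plain CPU tensor when no GPU is found
    b_y = y.to(device)
    loss = loss_func(cnn(b_x), b_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The complete, runnable script in the tutorial's original explicit-.cuda() style follows.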

import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision

# torch.manual_seed(1)

EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = False      # set to True the first time, to download the MNIST data

train_data = torchvision.datasets.MNIST(root='./mnist/', train=True, transform=torchvision.transforms.ToTensor(), download=DOWNLOAD_MNIST,)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)

test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)

# !!!!!!!! change in here !!!!!!!!! #
test_x = torch.unsqueeze(test_data.test_data, dim=1).type(torch.FloatTensor)[:2000].cuda()/255.   # Tensor on GPU
test_y = test_data.test_labels[:2000].cuda()

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),)
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)      # flatten to (batch, 32 * 7 * 7)
        output = self.out(x)
        return output

cnn = CNN()

# !!!!!!!! change in here !!!!!!!!! #
cnn.cuda()      # moves all model parameters and buffers to the GPU

optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()

for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):

        # !!!!!!!! change in here !!!!!!!!! #
        b_x = x.cuda()    # Tensor on GPU
        b_y = y.cuda()    # Tensor on GPU

        output = cnn(b_x)
        loss = loss_func(output, b_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step % 50 == 0:
            test_output = cnn(test_x)

            # !!!!!!!! change in here !!!!!!!!! #
            pred_y = torch.max(test_output, 1)[1].cuda().data    # keep the computation on the GPU

            accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.data.cpu().numpy(), '| test accuracy: %.2f' % accuracy)

test_output = cnn(test_x[:10])

# !!!!!!!! change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data    # keep the computation on the GPU

print(pred_y, 'prediction number')
print(test_y[:10], 'real number')

Finally, if part of the computation still has to happen on the CPU (plotting with matplotlib, for example), move that data back with .cpu():

cpu_data = gpu_data.cpu()
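For instance, to hand the GPU predictions from the script above over to numpy or matplotlib, they have to come back to the CPU first. A minimal sketch, reusing pred_y and test_y from the code above:

pred_np = pred_y.cpu().numpy()          # GPU tensor -> CPU tensor -> numpy array
real_np = test_y[:10].cpu().numpy()
print(pred_np, 'prediction number')
print(real_np, 'real number')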
