PyTorch Introductory Tutorial Five


As before, let's look directly at how the code is written.

First, create the inputs and outputs from numpy.

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable

# hyper parameters
input_size = 1
output_size = 1
num_epochs = 60
learning_rate = 0.001

# toy dataset
x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)

y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)

Construct the linear regression model, which has only a single linear layer.

# linear regression model
class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LinearRegression(input_size, output_size)

Construct the loss and the optimizer.

# loss and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
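For reference, nn.MSELoss with its default settings returns the mean of the squared element-wise differences between prediction and target. A minimal sketch of the equivalent hand computation (the two toy tensors below are made up purely for illustration):

# rough check that MSELoss matches the hand-written mean squared error
# (values here are made up for illustration only)
pred = Variable(torch.from_numpy(np.array([[1.0], [2.0]], dtype=np.float32)))
target = Variable(torch.from_numpy(np.array([[0.5], [2.5]], dtype=np.float32)))
manual = ((pred - target) ** 2).mean()     # (0.25 + 0.25) / 2 = 0.25
print(manual, criterion(pred, target))     # both print 0.25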

Start training.

# train the model
for epoch in range(num_epochs):
    # convert numpy arrays to torch variables
    inputs = Variable(torch.from_numpy(x_train))
    targets = Variable(torch.from_numpy(y_train))

    # forward + backward + optimize
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()

    if (epoch+1) % 5 == 0:
        print('epoch [%d/%d], loss: %.4f'
              % (epoch+1, num_epochs, loss.data[0]))
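Note that this loop uses the pre-0.4 PyTorch API. On 0.4 and later, Variable has been merged into Tensor and loss.data[0] is replaced by loss.item(); a sketch of the same training step on a newer release (not part of the original tutorial):

# same training step on PyTorch >= 0.4: no Variable wrapper, loss.item() for the scalar
inputs = torch.from_numpy(x_train)
targets = torch.from_numpy(y_train)
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
print('loss: %.4f' % loss.item())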

Here, the prototype of torch.from_numpy is shown below. The returned tensor and the source ndarray share the same memory, so modifying one also modifies the other:

torch.from_numpy(ndarray)

>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
torch.LongTensor([1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
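The conversion in the other direction works the same way: calling .numpy() on a CPU tensor returns an ndarray that shares memory with the tensor. A small sketch:

# tensor -> numpy also shares the underlying buffer (CPU tensors only)
t = torch.ones(3)
a = t.numpy()    # array([1., 1., 1.], dtype=float32)
t[0] = 5
print(a)         # array([5., 1., 1.], dtype=float32)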

Plot the graph to check the result.

# plot the graph
predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()
plt.plot(x_train, y_train, 'ro', label='original data')
plt.plot(x_train, predicted, label='fitted line')
plt.legend()
plt.show()

The final result looks roughly like this: you can see that what the linear layer fits is simply a straight line.
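Since a single nn.Linear layer computes y = w*x + b, the slope and intercept of that fitted line can be read directly from the trained layer. A small sketch, using the linear attribute of the LinearRegression class defined above:

# read off the learned slope and intercept of the fitted line
w = model.linear.weight.data[0][0]   # slope
b = model.linear.bias.data[0]        # intercept
print('fitted line: y = %.4f * x + %.4f' % (w, b))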

Logistic regression can be seen as a classification problem, while linear regression can be seen as a fitting or prediction problem.
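For contrast, a logistic regression model in the same style keeps a single linear layer but squashes its output into a probability and trains with a classification loss rather than MSE. A minimal binary-classification sketch (the names here are chosen for illustration and are not part of the original tutorial):

# logistic regression sketch: linear layer + sigmoid, binary cross entropy loss
class LogisticRegression(nn.Module):
    def __init__(self, input_size):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))

clf = LogisticRegression(input_size)
criterion_clf = nn.BCELoss()   # classification loss instead of nn.MSELoss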
