PyTorch Learning Notes (5)


9) Using LSTM in PyTorch

When learning to use RNNs in PyTorch, it is best to start from the official API documentation. One point to watch: in PyTorch, the RNN input has shape (seq_len, batch, input_size), the initial hidden state h_0 has shape (num_layers * num_directions, batch, hidden_size), and the output has shape (seq_len, batch, hidden_size * num_directions). This differs from TensorFlow and Keras, which put the batch dimension first.
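To make the convention concrete, here is a minimal shape check. This is a sketch rather than code from the original post, and it uses the post-0.4 tensor API (no autograd.Variable wrapper); the sizes are arbitrary, chosen only so each dimension is recognizable.

import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 7, 3, 10, 20
lstm = nn.LSTM(input_size, hidden_size)      # num_layers=1, unidirectional

x = torch.randn(seq_len, batch, input_size)  # input: (seq_len, batch, input_size)
h0 = torch.zeros(1, batch, hidden_size)      # h_0: (num_layers * num_directions, batch, hidden_size)
c0 = torch.zeros(1, batch, hidden_size)      # c_0: same shape as h_0

output, (hn, cn) = lstm(x, (h0, c0))
print(output.size())  # torch.Size([7, 3, 20]) -> (seq_len, batch, hidden_size * num_directions)
print(hn.size())      # torch.Size([1, 3, 20])

Passing batch_first=True to nn.LSTM switches the input and output to (batch, seq_len, feature), matching the Keras/TensorFlow habit; h_0 and c_0 keep their shape either way.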

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Turn a list of tokens into a Variable wrapping a LongTensor of word indices.
def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    tensor = torch.LongTensor(idxs)
    return autograd.Variable(tensor)
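As a quick sanity check on prepare_sequence (the toy vocabulary below is hypothetical, not from the original post):

to_ix = {"the": 0, "dog": 1}
print(prepare_sequence(["the", "dog", "the"], to_ix))
# prints a Variable wrapping the LongTensor [0, 1, 0]

Since PyTorch 0.4, autograd.Variable has been merged into Tensor, so returning torch.tensor(idxs, dtype=torch.long) would suffice; the Variable wrapper here matches the pre-0.4 API used throughout this note.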

# Training data: (tokenized sentence, gold tag sequence) pairs.
training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]

# Build the vocabulary: assign each distinct word the next free index.
word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
print(word_to_ix)

tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

embedding_dim = 6

hidden_dim = 6

class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        # One-layer, unidirectional LSTM from embeddings to hidden states.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        # Linear layer mapping hidden states to tag scores.
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        # (h_0, c_0), each of shape (num_layers * num_directions, batch, hidden_size).
        return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim)),
                autograd.Variable(torch.zeros(1, 1, self.hidden_dim)))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        # Reshape to (seq_len, batch=1, embedding_dim) before feeding the LSTM.
        lstm_out, self.hidden = self.lstm(
            embeds.view(len(sentence), 1, -1), self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
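A note on the last line of forward: the model emits log-probabilities because nn.NLLLoss, used below, expects log-probabilities as input. The pair F.log_softmax + nn.NLLLoss is mathematically equivalent to applying nn.CrossEntropyLoss directly to the raw tag_space scores. A small sketch of that equivalence (random values, post-0.4 tensor API):

import torch
import torch.nn as nn
import torch.nn.functional as F

scores = torch.randn(5, 3)                   # raw scores for 5 tokens over 3 tags
targets = torch.LongTensor([0, 2, 1, 0, 2])  # arbitrary gold tag indices
nll = nn.NLLLoss()(F.log_softmax(scores, dim=1), targets)
ce = nn.CrossEntropyLoss()(scores, targets)
print(torch.allclose(nll, ce))               # True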

model = LSTMTagger(embedding_dim, hidden_dim, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

inputs = prepare_sequence(training_data[0][0], word_to_ix)

print(inputs)

print("inputs size: ", inputs.size())

tag_scores = model(inputs)

print(tag_scores)

print("tag_scores size: ", tag_scores.size())

for epoch in range(300):
    for sentence, tags in training_data:
        # Clear accumulated gradients and reset the hidden state for each sentence.
        optimizer.zero_grad()
        model.hidden = model.init_hidden()
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)
        tag_scores = model(sentence_in)
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

inputs = prepare_sequence(training_data[0][0], word_to_ix)

tag_scores = model(inputs)

print(tag_scores)

The output is as follows:

Variable containing:
 0
 1
 2
 3
 4
[torch.LongTensor of size 5]

inputs size:  torch.Size([5])

Variable containing:
-1.1750 -1.2042 -0.9385
-1.2109 -1.1668 -0.9398
-1.1762 -1.2194 -0.9259
-1.2111 -1.2005 -0.9135
-1.2451 -1.1828 -0.9022
[torch.FloatTensor of size 5x3]

tag_scores size:  torch.Size([5, 3])

Variable containing:
-0.0832 -4.6391 -2.6573
-6.3608 -0.0345 -3.4355
-2.6776 -2.4210 -0.1714
-0.0497 -6.1473 -3.0711
-6.2093 -0.0339 -3.4624
[torch.FloatTensor of size 5x3]
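Each row of the trained tag_scores holds log-probabilities over the three tags, so the predicted tag for a word is the argmax of its row. A short sketch of decoding the scores back to tag names, assuming the tag_to_ix mapping defined above:

ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
_, predicted = torch.max(tag_scores, dim=1)
print([ix_to_tag[ix] for ix in predicted.data.tolist()])
# for the trained scores above: ['DET', 'NN', 'V', 'DET', 'NN']

That is the correct tag sequence for "The dog ate the apple".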
